00:00:00.000 Started by upstream project "autotest-per-patch" build number 132295
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.126 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.127 The recommended git tool is: git
00:00:00.127 using credential 00000000-0000-0000-0000-000000000002
00:00:00.130 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.205 Fetching changes from the remote Git repository
00:00:00.206 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.276 Using shallow fetch with depth 1
00:00:00.276 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.276 > git --version # timeout=10
00:00:00.329 > git --version # 'git version 2.39.2'
00:00:00.330 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.360 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.360 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.405 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.415 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.426 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:07.426 > git config core.sparsecheckout # timeout=10
00:00:07.437 > git read-tree -mu HEAD # timeout=10
00:00:07.451 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:07.468 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:07.468 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:07.557 [Pipeline] Start of Pipeline
00:00:07.570 [Pipeline] library
00:00:07.572 Loading library shm_lib@master
00:00:07.572 Library shm_lib@master is cached. Copying from home.
00:00:07.584 [Pipeline] node
00:00:07.590 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:00:07.594 [Pipeline] {
00:00:07.602 [Pipeline] catchError
00:00:07.603 [Pipeline] {
00:00:07.611 [Pipeline] wrap
00:00:07.617 [Pipeline] {
00:00:07.625 [Pipeline] stage
00:00:07.627 [Pipeline] { (Prologue)
00:00:07.645 [Pipeline] echo
00:00:07.647 Node: VM-host-SM0
00:00:07.654 [Pipeline] cleanWs
00:00:07.665 [WS-CLEANUP] Deleting project workspace...
00:00:07.665 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.669 [WS-CLEANUP] done
00:00:07.868 [Pipeline] setCustomBuildProperty
00:00:07.931 [Pipeline] httpRequest
00:00:08.427 [Pipeline] echo
00:00:08.429 Sorcerer 10.211.164.101 is alive
00:00:08.437 [Pipeline] retry
00:00:08.438 [Pipeline] {
00:00:08.452 [Pipeline] httpRequest
00:00:08.456 HttpMethod: GET
00:00:08.457 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:08.457 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:08.475 Response Code: HTTP/1.1 200 OK
00:00:08.476 Success: Status code 200 is in the accepted range: 200,404
00:00:08.476 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:28.861 [Pipeline] }
00:00:28.880 [Pipeline] // retry
00:00:28.889 [Pipeline] sh
00:00:29.169 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:29.185 [Pipeline] httpRequest
00:00:29.630 [Pipeline] echo
00:00:29.632 Sorcerer 10.211.164.101 is alive
00:00:29.642 [Pipeline] retry
00:00:29.644 [Pipeline] {
00:00:29.659 [Pipeline] httpRequest
00:00:29.664 HttpMethod: GET
00:00:29.665 URL: http://10.211.164.101/packages/spdk_dec6d38430cf0927c9d59eb0ba816b99c261d5fc.tar.gz
00:00:29.665 Sending request to url: http://10.211.164.101/packages/spdk_dec6d38430cf0927c9d59eb0ba816b99c261d5fc.tar.gz
00:00:29.686 Response Code: HTTP/1.1 200 OK
00:00:29.687 Success: Status code 200 is in the accepted range: 200,404
00:00:29.688 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_dec6d38430cf0927c9d59eb0ba816b99c261d5fc.tar.gz
00:01:53.011 [Pipeline] }
00:01:53.029 [Pipeline] // retry
00:01:53.036 [Pipeline] sh
00:01:53.347 + tar --no-same-owner -xf spdk_dec6d38430cf0927c9d59eb0ba816b99c261d5fc.tar.gz
00:01:56.650 [Pipeline] sh
00:01:56.932 + git -C spdk log --oneline -n5
00:01:56.932 dec6d3843 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set
00:01:56.932 4b2d483c6 dif: Add spdk_dif_pi_format_get_pi_size() to use for NVMe PRACT
00:01:56.932 560a1dde3 bdev/malloc: Support accel sequence when DIF is enabled
00:01:56.932 30279d1cf bdev: Add spdk_bdev_io_has_no_metadata() for bdev modules
00:01:56.932 4bd31eb0a bdev/malloc: Extract internal of verify_pi() for code reuse
00:01:56.949 [Pipeline] writeFile
00:01:56.963 [Pipeline] sh
00:01:57.246 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:57.259 [Pipeline] sh
00:01:57.540 + cat autorun-spdk.conf
00:01:57.540 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:57.540 SPDK_TEST_NVMF=1
00:01:57.540 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:57.540 SPDK_TEST_URING=1
00:01:57.540 SPDK_TEST_USDT=1
00:01:57.540 SPDK_RUN_UBSAN=1
00:01:57.540 NET_TYPE=virt
00:01:57.540 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:57.547 RUN_NIGHTLY=0
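
Aside: autorun-spdk.conf is a plain shell fragment; later steps in this log source it, so each flag simply becomes an environment variable that gates a test group. A minimal hypothetical consumer (a sketch, not the actual autorun.sh logic):

    #!/usr/bin/env bash
    # Sketch only: read the job configuration the same way the test scripts do.
    source ./autorun-spdk.conf
    if [[ "$SPDK_TEST_NVMF" -eq 1 ]]; then
        # e.g. run the NVMe-oF suite over the configured transport (tcp in this job)
        echo "would run nvmf tests over ${SPDK_TEST_NVMF_TRANSPORT}"
    fi
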
00:01:57.549 [Pipeline] }
00:01:57.563 [Pipeline] // stage
00:01:57.579 [Pipeline] stage
00:01:57.581 [Pipeline] { (Run VM)
00:01:57.595 [Pipeline] sh
00:01:57.877 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:57.877 + echo 'Start stage prepare_nvme.sh'
00:01:57.877 Start stage prepare_nvme.sh
00:01:57.877 + [[ -n 3 ]]
00:01:57.877 + disk_prefix=ex3
00:01:57.877 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]]
00:01:57.877 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]]
00:01:57.877 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf
00:01:57.877 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:57.877 ++ SPDK_TEST_NVMF=1
00:01:57.877 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:57.877 ++ SPDK_TEST_URING=1
00:01:57.877 ++ SPDK_TEST_USDT=1
00:01:57.877 ++ SPDK_RUN_UBSAN=1
00:01:57.877 ++ NET_TYPE=virt
00:01:57.877 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:57.877 ++ RUN_NIGHTLY=0
00:01:57.877 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:01:57.877 + nvme_files=()
00:01:57.877 + declare -A nvme_files
00:01:57.877 + backend_dir=/var/lib/libvirt/images/backends
00:01:57.877 + nvme_files['nvme.img']=5G
00:01:57.877 + nvme_files['nvme-cmb.img']=5G
00:01:57.877 + nvme_files['nvme-multi0.img']=4G
00:01:57.877 + nvme_files['nvme-multi1.img']=4G
00:01:57.877 + nvme_files['nvme-multi2.img']=4G
00:01:57.877 + nvme_files['nvme-openstack.img']=8G
00:01:57.877 + nvme_files['nvme-zns.img']=5G
00:01:57.877 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:57.877 + (( SPDK_TEST_FTL == 1 ))
00:01:57.877 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:57.877 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:57.877 + for nvme in "${!nvme_files[@]}"
00:01:57.877 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G
00:01:57.877 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:57.877 + for nvme in "${!nvme_files[@]}"
00:01:57.877 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G
00:01:57.877 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:57.877 + for nvme in "${!nvme_files[@]}"
00:01:57.877 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G
00:01:57.877 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:57.877 + for nvme in "${!nvme_files[@]}"
00:01:57.877 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G
00:01:57.877 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:57.877 + for nvme in "${!nvme_files[@]}"
00:01:57.878 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G
00:01:57.878 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:57.878 + for nvme in "${!nvme_files[@]}"
00:01:57.878 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G
00:01:57.878 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:58.136 + for nvme in "${!nvme_files[@]}"
00:01:58.136 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G
00:01:58.136 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:58.136 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu
00:01:58.136 + echo 'End stage prepare_nvme.sh'
00:01:58.136 End stage prepare_nvme.sh
00:01:58.148 [Pipeline] sh
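
Aside: the "Formatting ... fmt=raw size=... preallocation=falloc" lines are qemu-img output, so create_nvme_img.sh presumably reduces to something like the sketch below (a guess at the script's effect, not its actual contents):

    # Create a raw backing file for one emulated NVMe drive. preallocation=falloc
    # reserves the blocks via fallocate(2), so the guest sees a fixed-size disk
    # without zeroes being written up front.
    qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex3-nvme.img 5G
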
00:01:58.438 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:58.438 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39
00:01:58.438
00:01:58.438 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant
00:01:58.438 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk
00:01:58.438 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:01:58.438 HELP=0
00:01:58.438 DRY_RUN=0
00:01:58.438 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,
00:01:58.438 NVME_DISKS_TYPE=nvme,nvme,
00:01:58.438 NVME_AUTO_CREATE=0
00:01:58.438 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,
00:01:58.438 NVME_CMB=,,
00:01:58.438 NVME_PMR=,,
00:01:58.438 NVME_ZNS=,,
00:01:58.438 NVME_MS=,,
00:01:58.438 NVME_FDP=,,
00:01:58.438 SPDK_VAGRANT_DISTRO=fedora39
00:01:58.438 SPDK_VAGRANT_VMCPU=10
00:01:58.438 SPDK_VAGRANT_VMRAM=12288
00:01:58.438 SPDK_VAGRANT_PROVIDER=libvirt
00:01:58.438 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:58.438 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:58.438 SPDK_OPENSTACK_NETWORK=0
00:01:58.438 VAGRANT_PACKAGE_BOX=0
00:01:58.438 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:58.438 FORCE_DISTRO=true
00:01:58.438 VAGRANT_BOX_VERSION=
00:01:58.438 EXTRA_VAGRANTFILES=
00:01:58.438 NIC_MODEL=e1000
00:01:58.438
00:01:58.438 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt'
00:01:58.438 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:02:01.724 Bringing machine 'default' up with 'libvirt' provider...
00:02:02.657 ==> default: Creating image (snapshot of base box volume).
00:02:02.657 ==> default: Creating domain with the following settings...
00:02:02.657 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731666327_bfc421c8ebf0b2f3b1d4
00:02:02.657 ==> default: -- Domain type: kvm
00:02:02.657 ==> default: -- Cpus: 10
00:02:02.657 ==> default: -- Feature: acpi
00:02:02.657 ==> default: -- Feature: apic
00:02:02.657 ==> default: -- Feature: pae
00:02:02.657 ==> default: -- Memory: 12288M
00:02:02.657 ==> default: -- Memory Backing: hugepages:
00:02:02.657 ==> default: -- Management MAC:
00:02:02.657 ==> default: -- Loader:
00:02:02.657 ==> default: -- Nvram:
00:02:02.657 ==> default: -- Base box: spdk/fedora39
00:02:02.657 ==> default: -- Storage pool: default
00:02:02.657 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731666327_bfc421c8ebf0b2f3b1d4.img (20G)
00:02:02.657 ==> default: -- Volume Cache: default
00:02:02.657 ==> default: -- Kernel:
00:02:02.657 ==> default: -- Initrd:
00:02:02.657 ==> default: -- Graphics Type: vnc
00:02:02.657 ==> default: -- Graphics Port: -1
00:02:02.657 ==> default: -- Graphics IP: 127.0.0.1
00:02:02.657 ==> default: -- Graphics Password: Not defined
00:02:02.657 ==> default: -- Video Type: cirrus
00:02:02.657 ==> default: -- Video VRAM: 9216
00:02:02.657 ==> default: -- Sound Type:
00:02:02.657 ==> default: -- Keymap: en-us
00:02:02.657 ==> default: -- TPM Path:
00:02:02.657 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:02.657 ==> default: -- Command line args:
00:02:02.657 ==> default: -> value=-device,
00:02:02.657 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:02.657 ==> default: -> value=-drive,
00:02:02.657 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0,
00:02:02.657 ==> default: -> value=-device,
00:02:02.657 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:02.657 ==> default: -> value=-device,
00:02:02.657 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:02.657 ==> default: -> value=-drive,
00:02:02.657 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:02:02.657 ==> default: -> value=-device,
00:02:02.657 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:02.657 ==> default: -> value=-drive,
00:02:02.657 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:02:02.657 ==> default: -> value=-device,
00:02:02.657 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:02.657 ==> default: -> value=-drive,
00:02:02.657 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:02:02.657 ==> default: -> value=-device,
00:02:02.657 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
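
Aside: the argument pairs above define two NVMe controllers: serial 12340 (addr 0x10) with a single namespace backed by ex3-nvme.img, and serial 12341 (addr 0x11) with three namespaces backed by the multi0/1/2 images. Collapsed into one hand-wrapped QEMU command line (abridged; the rest of the machine elided), the second controller's wiring reads:

    qemu-system-x86_64 ... \
      -device nvme,id=nvme-1,serial=12341,addr=0x11 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1 \
      -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2 \
      -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3

The setup.sh status output further down confirms the guest view: nvme0 with one namespace, nvme1 with nvme1n1..nvme1n3.
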
00:02:02.916 ==> default: Creating shared folders metadata...
00:02:02.916 ==> default: Starting domain.
00:02:04.817 ==> default: Waiting for domain to get an IP address...
00:02:22.898 ==> default: Waiting for SSH to become available...
00:02:22.898 ==> default: Configuring and enabling network interfaces...
00:02:25.458 default: SSH address: 192.168.121.217:22
00:02:25.458 default: SSH username: vagrant
00:02:25.458 default: SSH auth method: private key
00:02:27.989 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:36.125 ==> default: Mounting SSHFS shared folder...
00:02:37.079 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:37.079 ==> default: Checking Mount..
00:02:38.455 ==> default: Folder Successfully Mounted!
00:02:38.455 ==> default: Running provisioner: file...
00:02:39.401 default: ~/.gitconfig => .gitconfig
00:02:39.666
00:02:39.666 SUCCESS!
00:02:39.666
00:02:39.666 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:39.666 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:39.666 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:39.666
00:02:39.675 [Pipeline] }
00:02:39.691 [Pipeline] // stage
00:02:39.701 [Pipeline] dir
00:02:39.701 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt
00:02:39.703 [Pipeline] {
00:02:39.716 [Pipeline] catchError
00:02:39.718 [Pipeline] {
00:02:39.730 [Pipeline] sh
00:02:40.010 + vagrant ssh-config --host vagrant
00:02:40.010 + sed -ne /^Host/,$p
00:02:40.010 + tee ssh_conf
00:02:43.299 Host vagrant
00:02:43.299 HostName 192.168.121.217
00:02:43.299 User vagrant
00:02:43.299 Port 22
00:02:43.299 UserKnownHostsFile /dev/null
00:02:43.299 StrictHostKeyChecking no
00:02:43.299 PasswordAuthentication no
00:02:43.299 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:43.299 IdentitiesOnly yes
00:02:43.299 LogLevel FATAL
00:02:43.299 ForwardAgent yes
00:02:43.299 ForwardX11 yes
00:02:43.299
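
Aside: with StrictHostKeyChecking disabled and the box's identity file pinned, that generated stanza lets every later step use stock OpenSSH instead of vagrant ssh. The pattern, reduced to its essentials (the commands mirror what this pipeline runs; the uname call is just an example):

    vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' > ssh_conf
    ssh -t -F ssh_conf vagrant@vagrant 'uname -r'     # any remote command
    scp -F ssh_conf some-local-file vagrant@vagrant:./  # same config works for scp
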
00:02:43.608 container="$(< /etc/hostname) ($agent)" 00:02:43.608 else 00:02:43.608 # Fallback 00:02:43.608 container=$agent 00:02:43.608 fi 00:02:43.608 fi 00:02:43.608 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:43.608 00:02:43.878 [Pipeline] } 00:02:43.894 [Pipeline] // withEnv 00:02:43.903 [Pipeline] setCustomBuildProperty 00:02:43.919 [Pipeline] stage 00:02:43.922 [Pipeline] { (Tests) 00:02:43.939 [Pipeline] sh 00:02:44.219 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:44.508 [Pipeline] sh 00:02:44.787 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:45.060 [Pipeline] timeout 00:02:45.060 Timeout set to expire in 1 hr 0 min 00:02:45.063 [Pipeline] { 00:02:45.078 [Pipeline] sh 00:02:45.359 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:45.926 HEAD is now at dec6d3843 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set 00:02:45.938 [Pipeline] sh 00:02:46.219 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:46.491 [Pipeline] sh 00:02:46.771 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:47.047 [Pipeline] sh 00:02:47.327 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:47.587 ++ readlink -f spdk_repo 00:02:47.587 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:47.587 + [[ -n /home/vagrant/spdk_repo ]] 00:02:47.587 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:47.587 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:47.587 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:47.587 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:47.587 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:47.587 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:47.587 + cd /home/vagrant/spdk_repo 00:02:47.587 + source /etc/os-release 00:02:47.587 ++ NAME='Fedora Linux' 00:02:47.587 ++ VERSION='39 (Cloud Edition)' 00:02:47.587 ++ ID=fedora 00:02:47.587 ++ VERSION_ID=39 00:02:47.587 ++ VERSION_CODENAME= 00:02:47.587 ++ PLATFORM_ID=platform:f39 00:02:47.587 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:47.587 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:47.587 ++ LOGO=fedora-logo-icon 00:02:47.587 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:47.587 ++ HOME_URL=https://fedoraproject.org/ 00:02:47.587 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:47.587 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:47.587 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:47.587 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:47.587 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:47.587 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:47.587 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:47.587 ++ SUPPORT_END=2024-11-12 00:02:47.587 ++ VARIANT='Cloud Edition' 00:02:47.587 ++ VARIANT_ID=cloud 00:02:47.587 + uname -a 00:02:47.587 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:47.587 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:47.845 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:48.105 Hugepages 00:02:48.105 node hugesize free / total 00:02:48.105 node0 1048576kB 0 / 0 00:02:48.105 node0 2048kB 0 / 0 00:02:48.105 00:02:48.105 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:48.105 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:48.105 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:48.105 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:48.105 + rm -f /tmp/spdk-ld-path 00:02:48.105 + source autorun-spdk.conf 00:02:48.105 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:48.105 ++ SPDK_TEST_NVMF=1 00:02:48.105 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:48.105 ++ SPDK_TEST_URING=1 00:02:48.105 ++ SPDK_TEST_USDT=1 00:02:48.105 ++ SPDK_RUN_UBSAN=1 00:02:48.105 ++ NET_TYPE=virt 00:02:48.105 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:48.105 ++ RUN_NIGHTLY=0 00:02:48.105 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:48.105 + [[ -n '' ]] 00:02:48.105 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:48.105 + for M in /var/spdk/build-*-manifest.txt 00:02:48.105 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:48.105 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:48.105 + for M in /var/spdk/build-*-manifest.txt 00:02:48.105 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:48.105 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:48.105 + for M in /var/spdk/build-*-manifest.txt 00:02:48.105 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:48.105 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:48.105 ++ uname 00:02:48.105 + [[ Linux == \L\i\n\u\x ]] 00:02:48.105 + sudo dmesg -T 00:02:48.105 + sudo dmesg --clear 00:02:48.105 + dmesg_pid=5263 00:02:48.105 + [[ Fedora Linux == FreeBSD ]] 00:02:48.105 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:48.105 + sudo dmesg -Tw 00:02:48.105 + 
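
Aside: "0 / 0" means no hugepages are reserved yet on either page size; SPDK's setup.sh allocates them when the tests start. The same counters can be read straight from sysfs and procfs (standard kernel paths, nothing SPDK-specific):

    # free/total 2 MiB hugepages on NUMA node 0, as in the table above
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    grep -i huge /proc/meminfo    # system-wide summary
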
00:02:48.105 + rm -f /tmp/spdk-ld-path
00:02:48.105 + source autorun-spdk.conf
00:02:48.105 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:48.105 ++ SPDK_TEST_NVMF=1
00:02:48.105 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:48.105 ++ SPDK_TEST_URING=1
00:02:48.105 ++ SPDK_TEST_USDT=1
00:02:48.105 ++ SPDK_RUN_UBSAN=1
00:02:48.105 ++ NET_TYPE=virt
00:02:48.105 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:48.105 ++ RUN_NIGHTLY=0
00:02:48.105 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:48.105 + [[ -n '' ]]
00:02:48.105 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:48.105 + for M in /var/spdk/build-*-manifest.txt
00:02:48.105 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:48.105 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:48.105 + for M in /var/spdk/build-*-manifest.txt
00:02:48.105 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:48.105 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:48.105 + for M in /var/spdk/build-*-manifest.txt
00:02:48.105 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:48.105 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:48.105 ++ uname
00:02:48.105 + [[ Linux == \L\i\n\u\x ]]
00:02:48.105 + sudo dmesg -T
00:02:48.105 + sudo dmesg --clear
00:02:48.105 + dmesg_pid=5263
00:02:48.105 + [[ Fedora Linux == FreeBSD ]]
00:02:48.105 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:48.105 + sudo dmesg -Tw
00:02:48.105 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:48.105 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:48.105 + [[ -x /usr/src/fio-static/fio ]]
00:02:48.105 + export FIO_BIN=/usr/src/fio-static/fio
00:02:48.105 + FIO_BIN=/usr/src/fio-static/fio
00:02:48.105 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:48.105 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:48.105 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:48.105 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:48.105 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:48.105 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:48.105 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:48.105 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:48.105 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:48.368 10:26:13 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:02:48.368 10:26:13 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:48.368 10:26:13 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:48.368 10:26:13 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:02:48.368 10:26:13 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:48.368 10:26:13 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1
00:02:48.368 10:26:13 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1
00:02:48.368 10:26:13 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:48.368 10:26:13 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt
00:02:48.368 10:26:13 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:48.368 10:26:13 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:02:48.368 10:26:13 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:48.368 10:26:13 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:48.368 10:26:13 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:02:48.368 10:26:13 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:48.368 10:26:13 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:48.368 10:26:13 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:48.368 10:26:13 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:48.368 10:26:13 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:48.368 10:26:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:48.368 10:26:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:48.368 10:26:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:48.368 10:26:13 -- paths/export.sh@5 -- $ export PATH
00:02:48.368 10:26:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:48.368 10:26:13 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:48.368 10:26:13 -- common/autobuild_common.sh@486 -- $ date +%s
00:02:48.368 10:26:13 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731666373.XXXXXX
00:02:48.368 10:26:13 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731666373.Zn7XkL
00:02:48.368 10:26:13 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:02:48.368 10:26:13 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:02:48.368 10:26:13 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:48.368 10:26:13 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:48.368 10:26:13 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:48.368 10:26:13 -- common/autobuild_common.sh@502 -- $ get_config_params
00:02:48.368 10:26:13 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:02:48.368 10:26:13 -- common/autotest_common.sh@10 -- $ set +x
00:02:48.368 10:26:13 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring'
00:02:48.368 10:26:13 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:02:48.368 10:26:13 -- pm/common@17 -- $ local monitor
00:02:48.368 10:26:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:48.368 10:26:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:48.368 10:26:13 -- pm/common@25 -- $ sleep 1
00:02:48.368 10:26:13 -- pm/common@21 -- $ date +%s
00:02:48.368 10:26:13 -- pm/common@21 -- $ date +%s
00:02:48.368 10:26:13 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731666373
00:02:48.368 10:26:13 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731666373
00:02:48.368 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731666373_collect-cpu-load.pm.log
00:02:48.368 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731666373_collect-vmstat.pm.log
00:02:49.304 10:26:14 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:02:49.304 10:26:14 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:49.304 10:26:14 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:49.304 10:26:14 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:49.304 10:26:14 -- spdk/autobuild.sh@16 -- $ date -u
00:02:49.304 Fri Nov 15 10:26:14 AM UTC 2024
00:02:49.304 10:26:14 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:49.304 v25.01-pre-211-gdec6d3843
00:02:49.304 10:26:14 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:49.304 10:26:14 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:49.304 10:26:14 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:49.304 10:26:14 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:02:49.304 10:26:14 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:02:49.304 10:26:14 -- common/autotest_common.sh@10 -- $ set +x
00:02:49.304 ************************************
00:02:49.304 START TEST ubsan
00:02:49.304 ************************************
00:02:49.304 using ubsan
00:02:49.304 10:26:14 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:02:49.304
00:02:49.304 real 0m0.000s
00:02:49.304 user 0m0.000s
00:02:49.304 sys 0m0.000s
00:02:49.304 10:26:14 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:02:49.304 10:26:14 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:49.304 ************************************
00:02:49.304 END TEST ubsan
00:02:49.304 ************************************
00:02:49.565 10:26:14 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:49.565 10:26:14 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:49.565 10:26:14 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:49.565 10:26:14 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:49.565 10:26:14 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:49.565 10:26:14 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:49.565 10:26:14 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:49.565 10:26:14 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:49.565 10:26:14 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
00:02:49.565 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:49.565 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:50.132 Using 'verbs' RDMA provider
00:03:05.948 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:18.153 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:18.153 Creating mk/config.mk...done.
00:03:18.153 Creating mk/cc.flags.mk...done.
00:03:18.153 Type 'make' to build.
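
Aside: the START TEST / END TEST banners and the real/user/sys timings above come from the run_test helper in autotest_common.sh. Condensed to its visible behaviour it is roughly the following (a sketch, not the real implementation):

    run_test() {                      # hypothetical condensation of the wrapper
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                     # emits the real/user/sys lines seen above
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }

The next stage invokes it as run_test make make -j10, which is exactly what follows.
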
00:03:18.153 10:26:42 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:18.154 10:26:42 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:03:18.154 10:26:42 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:03:18.154 10:26:42 -- common/autotest_common.sh@10 -- $ set +x
00:03:18.154 ************************************
00:03:18.154 START TEST make
00:03:18.154 ************************************
00:03:18.154 10:26:42 make -- common/autotest_common.sh@1127 -- $ make -j10
00:03:18.154 make[1]: Nothing to be done for 'all'.
00:03:30.360 The Meson build system
00:03:30.360 Version: 1.5.0
00:03:30.360 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:30.360 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:30.360 Build type: native build
00:03:30.360 Program cat found: YES (/usr/bin/cat)
00:03:30.360 Project name: DPDK
00:03:30.360 Project version: 24.03.0
00:03:30.360 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:30.360 C linker for the host machine: cc ld.bfd 2.40-14
00:03:30.360 Host machine cpu family: x86_64
00:03:30.360 Host machine cpu: x86_64
00:03:30.360 Message: ## Building in Developer Mode ##
00:03:30.360 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:30.360 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:30.360 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:30.360 Program python3 found: YES (/usr/bin/python3)
00:03:30.360 Program cat found: YES (/usr/bin/cat)
00:03:30.360 Compiler for C supports arguments -march=native: YES
00:03:30.360 Checking for size of "void *" : 8
00:03:30.360 Checking for size of "void *" : 8 (cached)
00:03:30.360 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:30.360 Library m found: YES
00:03:30.360 Library numa found: YES
00:03:30.360 Has header "numaif.h" : YES
00:03:30.360 Library fdt found: NO
00:03:30.360 Library execinfo found: NO
00:03:30.360 Has header "execinfo.h" : YES
00:03:30.360 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:30.360 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:30.360 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:30.360 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:30.360 Run-time dependency openssl found: YES 3.1.1
00:03:30.360 Run-time dependency libpcap found: YES 1.10.4
00:03:30.360 Has header "pcap.h" with dependency libpcap: YES
00:03:30.360 Compiler for C supports arguments -Wcast-qual: YES
00:03:30.360 Compiler for C supports arguments -Wdeprecated: YES
00:03:30.360 Compiler for C supports arguments -Wformat: YES
00:03:30.360 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:30.360 Compiler for C supports arguments -Wformat-security: NO
00:03:30.360 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:30.360 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:30.360 Compiler for C supports arguments -Wnested-externs: YES
00:03:30.360 Compiler for C supports arguments -Wold-style-definition: YES
00:03:30.360 Compiler for C supports arguments -Wpointer-arith: YES
00:03:30.360 Compiler for C supports arguments -Wsign-compare: YES
00:03:30.360 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:30.360 Compiler for C supports arguments -Wundef: YES
00:03:30.360 Compiler for C supports arguments -Wwrite-strings: YES
00:03:30.360 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:30.360 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:30.360 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:30.360 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:30.360 Program objdump found: YES (/usr/bin/objdump)
00:03:30.360 Compiler for C supports arguments -mavx512f: YES
00:03:30.360 Checking if "AVX512 checking" compiles: YES
00:03:30.360 Fetching value of define "__SSE4_2__" : 1
00:03:30.360 Fetching value of define "__AES__" : 1
00:03:30.360 Fetching value of define "__AVX__" : 1
00:03:30.360 Fetching value of define "__AVX2__" : 1
00:03:30.360 Fetching value of define "__AVX512BW__" : (undefined)
00:03:30.360 Fetching value of define "__AVX512CD__" : (undefined)
00:03:30.360 Fetching value of define "__AVX512DQ__" : (undefined)
00:03:30.360 Fetching value of define "__AVX512F__" : (undefined)
00:03:30.360 Fetching value of define "__AVX512VL__" : (undefined)
00:03:30.360 Fetching value of define "__PCLMUL__" : 1
00:03:30.360 Fetching value of define "__RDRND__" : 1
00:03:30.360 Fetching value of define "__RDSEED__" : 1
00:03:30.360 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:30.360 Fetching value of define "__znver1__" : (undefined)
00:03:30.360 Fetching value of define "__znver2__" : (undefined)
00:03:30.360 Fetching value of define "__znver3__" : (undefined)
00:03:30.360 Fetching value of define "__znver4__" : (undefined)
00:03:30.360 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:30.360 Message: lib/log: Defining dependency "log"
00:03:30.360 Message: lib/kvargs: Defining dependency "kvargs"
00:03:30.360 Message: lib/telemetry: Defining dependency "telemetry"
00:03:30.360 Checking for function "getentropy" : NO
00:03:30.360 Message: lib/eal: Defining dependency "eal"
00:03:30.360 Message: lib/ring: Defining dependency "ring"
00:03:30.360 Message: lib/rcu: Defining dependency "rcu"
00:03:30.360 Message: lib/mempool: Defining dependency "mempool"
00:03:30.360 Message: lib/mbuf: Defining dependency "mbuf"
00:03:30.360 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:30.360 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:30.360 Compiler for C supports arguments -mpclmul: YES
00:03:30.360 Compiler for C supports arguments -maes: YES
00:03:30.360 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:30.360 Compiler for C supports arguments -mavx512bw: YES
00:03:30.360 Compiler for C supports arguments -mavx512dq: YES
00:03:30.360 Compiler for C supports arguments -mavx512vl: YES
00:03:30.360 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:30.360 Compiler for C supports arguments -mavx2: YES
00:03:30.360 Compiler for C supports arguments -mavx: YES
00:03:30.360 Message: lib/net: Defining dependency "net"
00:03:30.360 Message: lib/meter: Defining dependency "meter"
00:03:30.360 Message: lib/ethdev: Defining dependency "ethdev"
00:03:30.360 Message: lib/pci: Defining dependency "pci"
00:03:30.360 Message: lib/cmdline: Defining dependency "cmdline"
00:03:30.360 Message: lib/hash: Defining dependency "hash"
00:03:30.360 Message: lib/timer: Defining dependency "timer"
00:03:30.360 Message: lib/compressdev: Defining dependency "compressdev"
00:03:30.360 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:30.360 Message: lib/dmadev: Defining dependency "dmadev"
00:03:30.360 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:30.360 Message: lib/power: Defining dependency "power"
00:03:30.360 Message: lib/reorder: Defining dependency "reorder"
00:03:30.360 Message: lib/security: Defining dependency "security"
00:03:30.360 Has header "linux/userfaultfd.h" : YES
00:03:30.360 Has header "linux/vduse.h" : YES
00:03:30.360 Message: lib/vhost: Defining dependency "vhost"
00:03:30.360 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:30.360 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:30.360 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:30.360 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:30.360 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:30.360 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:30.360 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:30.360 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:30.360 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:30.360 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:30.360 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:30.360 Configuring doxy-api-html.conf using configuration
00:03:30.360 Configuring doxy-api-man.conf using configuration
00:03:30.360 Program mandb found: YES (/usr/bin/mandb)
00:03:30.360 Program sphinx-build found: NO
00:03:30.360 Configuring rte_build_config.h using configuration
00:03:30.360 Message:
00:03:30.360 =================
00:03:30.360 Applications Enabled
00:03:30.360 =================
00:03:30.360
00:03:30.360 apps:
00:03:30.360
00:03:30.360
00:03:30.360 Message:
00:03:30.360 =================
00:03:30.360 Libraries Enabled
00:03:30.360 =================
00:03:30.360
00:03:30.360 libs:
00:03:30.360 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:30.360 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:30.360 cryptodev, dmadev, power, reorder, security, vhost,
00:03:30.360
00:03:30.360 Message:
00:03:30.360 ===============
00:03:30.360 Drivers Enabled
00:03:30.360 ===============
00:03:30.360
00:03:30.360 common:
00:03:30.360
00:03:30.360 bus:
00:03:30.360 pci, vdev,
00:03:30.360 mempool:
00:03:30.360 ring,
00:03:30.360 dma:
00:03:30.360
00:03:30.360 net:
00:03:30.360
00:03:30.360 crypto:
00:03:30.360
00:03:30.360 compress:
00:03:30.360
00:03:30.360 vdpa:
00:03:30.360
00:03:30.360
00:03:30.360 Message:
00:03:30.360 =================
00:03:30.360 Content Skipped
00:03:30.360 =================
00:03:30.360
00:03:30.360 apps:
00:03:30.360 dumpcap: explicitly disabled via build config
00:03:30.360 graph: explicitly disabled via build config
00:03:30.360 pdump: explicitly disabled via build config
00:03:30.360 proc-info: explicitly disabled via build config
00:03:30.360 test-acl: explicitly disabled via build config
00:03:30.360 test-bbdev: explicitly disabled via build config
00:03:30.360 test-cmdline: explicitly disabled via build config
00:03:30.360 test-compress-perf: explicitly disabled via build config
00:03:30.360 test-crypto-perf: explicitly disabled via build config
00:03:30.360 test-dma-perf: explicitly disabled via build config
00:03:30.360 test-eventdev: explicitly disabled via build config
00:03:30.360 test-fib: explicitly disabled via build config
00:03:30.360 test-flow-perf: explicitly disabled via build config
00:03:30.360 test-gpudev: explicitly disabled via build config
00:03:30.360 test-mldev: explicitly disabled via build config
00:03:30.360 test-pipeline: explicitly disabled via build config
00:03:30.361 test-pmd: explicitly disabled via build config
00:03:30.361 test-regex: explicitly disabled via build config
00:03:30.361 test-sad: explicitly disabled via build config
00:03:30.361 test-security-perf: explicitly disabled via build config
00:03:30.361
00:03:30.361 libs:
00:03:30.361 argparse: explicitly disabled via build config
00:03:30.361 metrics: explicitly disabled via build config
00:03:30.361 acl: explicitly disabled via build config
00:03:30.361 bbdev: explicitly disabled via build config
00:03:30.361 bitratestats: explicitly disabled via build config
00:03:30.361 bpf: explicitly disabled via build config
00:03:30.361 cfgfile: explicitly disabled via build config
00:03:30.361 distributor: explicitly disabled via build config
00:03:30.361 efd: explicitly disabled via build config
00:03:30.361 eventdev: explicitly disabled via build config
00:03:30.361 dispatcher: explicitly disabled via build config
00:03:30.361 gpudev: explicitly disabled via build config
00:03:30.361 gro: explicitly disabled via build config
00:03:30.361 gso: explicitly disabled via build config
00:03:30.361 ip_frag: explicitly disabled via build config
00:03:30.361 jobstats: explicitly disabled via build config
00:03:30.361 latencystats: explicitly disabled via build config
00:03:30.361 lpm: explicitly disabled via build config
00:03:30.361 member: explicitly disabled via build config
00:03:30.361 pcapng: explicitly disabled via build config
00:03:30.361 rawdev: explicitly disabled via build config
00:03:30.361 regexdev: explicitly disabled via build config
00:03:30.361 mldev: explicitly disabled via build config
00:03:30.361 rib: explicitly disabled via build config
00:03:30.361 sched: explicitly disabled via build config
00:03:30.361 stack: explicitly disabled via build config
00:03:30.361 ipsec: explicitly disabled via build config
00:03:30.361 pdcp: explicitly disabled via build config
00:03:30.361 fib: explicitly disabled via build config
00:03:30.361 port: explicitly disabled via build config
00:03:30.361 pdump: explicitly disabled via build config
00:03:30.361 table: explicitly disabled via build config
00:03:30.361 pipeline: explicitly disabled via build config
00:03:30.361 graph: explicitly disabled via build config
00:03:30.361 node: explicitly disabled via build config
00:03:30.361
00:03:30.361 drivers:
00:03:30.361 common/cpt: not in enabled drivers build config
00:03:30.361 common/dpaax: not in enabled drivers build config
00:03:30.361 common/iavf: not in enabled drivers build config
00:03:30.361 common/idpf: not in enabled drivers build config
00:03:30.361 common/ionic: not in enabled drivers build config
00:03:30.361 common/mvep: not in enabled drivers build config
00:03:30.361 common/octeontx: not in enabled drivers build config
00:03:30.361 bus/auxiliary: not in enabled drivers build config
00:03:30.361 bus/cdx: not in enabled drivers build config
00:03:30.361 bus/dpaa: not in enabled drivers build config
00:03:30.361 bus/fslmc: not in enabled drivers build config
00:03:30.361 bus/ifpga: not in enabled drivers build config
00:03:30.361 bus/platform: not in enabled drivers build config
00:03:30.361 bus/uacce: not in enabled drivers build config
00:03:30.361 bus/vmbus: not in enabled drivers build config
00:03:30.361 common/cnxk: not in enabled drivers build config
00:03:30.361 common/mlx5: not in enabled drivers build config
00:03:30.361 common/nfp: not in enabled drivers build config
00:03:30.361 common/nitrox: not in enabled drivers build config
00:03:30.361 common/qat: not in enabled drivers build config
00:03:30.361 common/sfc_efx: not in enabled drivers build config
00:03:30.361 mempool/bucket: not in enabled drivers build config
00:03:30.361 mempool/cnxk: not in enabled drivers build config
00:03:30.361 mempool/dpaa: not in enabled drivers build config
00:03:30.361 mempool/dpaa2: not in enabled drivers build config
00:03:30.361 mempool/octeontx: not in enabled drivers build config
00:03:30.361 mempool/stack: not in enabled drivers build config
00:03:30.361 dma/cnxk: not in enabled drivers build config
00:03:30.361 dma/dpaa: not in enabled drivers build config
00:03:30.361 dma/dpaa2: not in enabled drivers build config
00:03:30.361 dma/hisilicon: not in enabled drivers build config
00:03:30.361 dma/idxd: not in enabled drivers build config
00:03:30.361 dma/ioat: not in enabled drivers build config
00:03:30.361 dma/skeleton: not in enabled drivers build config
00:03:30.361 net/af_packet: not in enabled drivers build config
00:03:30.361 net/af_xdp: not in enabled drivers build config
00:03:30.361 net/ark: not in enabled drivers build config
00:03:30.361 net/atlantic: not in enabled drivers build config
00:03:30.361 net/avp: not in enabled drivers build config
00:03:30.361 net/axgbe: not in enabled drivers build config
00:03:30.361 net/bnx2x: not in enabled drivers build config
00:03:30.361 net/bnxt: not in enabled drivers build config
00:03:30.361 net/bonding: not in enabled drivers build config
00:03:30.361 net/cnxk: not in enabled drivers build config
00:03:30.361 net/cpfl: not in enabled drivers build config
00:03:30.361 net/cxgbe: not in enabled drivers build config
00:03:30.361 net/dpaa: not in enabled drivers build config
00:03:30.361 net/dpaa2: not in enabled drivers build config
00:03:30.361 net/e1000: not in enabled drivers build config
00:03:30.361 net/ena: not in enabled drivers build config
00:03:30.361 net/enetc: not in enabled drivers build config
00:03:30.361 net/enetfec: not in enabled drivers build config
00:03:30.361 net/enic: not in enabled drivers build config
00:03:30.361 net/failsafe: not in enabled drivers build config
00:03:30.361 net/fm10k: not in enabled drivers build config
00:03:30.361 net/gve: not in enabled drivers build config
00:03:30.361 net/hinic: not in enabled drivers build config
00:03:30.361 net/hns3: not in enabled drivers build config
00:03:30.361 net/i40e: not in enabled drivers build config
00:03:30.361 net/iavf: not in enabled drivers build config
00:03:30.361 net/ice: not in enabled drivers build config
00:03:30.361 net/idpf: not in enabled drivers build config
00:03:30.361 net/igc: not in enabled drivers build config
00:03:30.361 net/ionic: not in enabled drivers build config
00:03:30.361 net/ipn3ke: not in enabled drivers build config
00:03:30.361 net/ixgbe: not in enabled drivers build config
00:03:30.361 net/mana: not in enabled drivers build config
00:03:30.361 net/memif: not in enabled drivers build config
00:03:30.361 net/mlx4: not in enabled drivers build config
00:03:30.361 net/mlx5: not in enabled drivers build config
00:03:30.361 net/mvneta: not in enabled drivers build config
00:03:30.361 net/mvpp2: not in enabled drivers build config
00:03:30.361 net/netvsc: not in enabled drivers build config
00:03:30.361 net/nfb: not in enabled drivers build config
00:03:30.361 net/nfp: not in enabled drivers build config
00:03:30.361 net/ngbe: not in enabled drivers build config
00:03:30.361 net/null: not in enabled drivers build config
00:03:30.361 net/octeontx: not in enabled drivers build config
00:03:30.361 net/octeon_ep: not in enabled drivers build config
00:03:30.361 net/pcap: not in enabled drivers build config
00:03:30.361 net/pfe: not in enabled drivers build config
00:03:30.361 net/qede: not in enabled drivers build config
00:03:30.361 net/ring: not in enabled drivers build config
00:03:30.361 net/sfc: not in enabled drivers build config
00:03:30.361 net/softnic: not in enabled drivers build config
00:03:30.361 net/tap: not in enabled drivers build config
00:03:30.361 net/thunderx: not in enabled drivers build config
00:03:30.361 net/txgbe: not in enabled drivers build config
00:03:30.361 net/vdev_netvsc: not in enabled drivers build config
00:03:30.361 net/vhost: not in enabled drivers build config
00:03:30.361 net/virtio: not in enabled drivers build config
00:03:30.361 net/vmxnet3: not in enabled drivers build config
00:03:30.361 raw/*: missing internal dependency, "rawdev"
00:03:30.361 crypto/armv8: not in enabled drivers build config
00:03:30.361 crypto/bcmfs: not in enabled drivers build config
00:03:30.361 crypto/caam_jr: not in enabled drivers build config
00:03:30.361 crypto/ccp: not in enabled drivers build config
00:03:30.361 crypto/cnxk: not in enabled drivers build config
00:03:30.361 crypto/dpaa_sec: not in enabled drivers build config
00:03:30.361 crypto/dpaa2_sec: not in enabled drivers build config
00:03:30.361 crypto/ipsec_mb: not in enabled drivers build config
00:03:30.361 crypto/mlx5: not in enabled drivers build config
00:03:30.361 crypto/mvsam: not in enabled drivers build config
00:03:30.361 crypto/nitrox: not in enabled drivers build config
00:03:30.361 crypto/null: not in enabled drivers build config
00:03:30.361 crypto/octeontx: not in enabled drivers build config
00:03:30.361 crypto/openssl: not in enabled drivers build config
00:03:30.361 crypto/scheduler: not in enabled drivers build config
00:03:30.361 crypto/uadk: not in enabled drivers build config
00:03:30.361 crypto/virtio: not in enabled drivers build config
00:03:30.361 compress/isal: not in enabled drivers build config
00:03:30.361 compress/mlx5: not in enabled drivers build config
00:03:30.361 compress/nitrox: not in enabled drivers build config
00:03:30.361 compress/octeontx: not in enabled drivers build config
00:03:30.361 compress/zlib: not in enabled drivers build config
00:03:30.361 regex/*: missing internal dependency, "regexdev"
00:03:30.361 ml/*: missing internal dependency, "mldev"
00:03:30.361 vdpa/ifc: not in enabled drivers build config
00:03:30.361 vdpa/mlx5: not in enabled drivers build config
00:03:30.361 vdpa/nfp: not in enabled drivers build config
00:03:30.361 vdpa/sfc: not in enabled drivers build config
00:03:30.361 event/*: missing internal dependency, "eventdev"
00:03:30.361 baseband/*: missing internal dependency, "bbdev"
00:03:30.361 gpu/*: missing internal dependency, "gpudev"
00:03:30.361
00:03:30.361
00:03:30.361 Build targets in project: 85
00:03:30.361
00:03:30.361 DPDK 24.03.0
00:03:30.361
00:03:30.361 User defined options
00:03:30.361 buildtype : debug
00:03:30.361 default_library : shared
00:03:30.361 libdir : lib
00:03:30.361 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:30.361 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:30.361 c_link_args :
00:03:30.361 cpu_instruction_set: native
00:03:30.361 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:03:30.361 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:03:30.361 enable_docs : false
00:03:30.361 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:03:30.361 enable_kmods : false
00:03:30.361 max_lcores : 128
00:03:30.361 tests : false
00:03:30.361
00:03:30.361 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
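
Aside: the "User defined options" block above is just DPDK's meson configuration echoed back; SPDK's configure assembled it. Reproduced by hand it would look roughly like the following (a sketch; option names follow DPDK's meson_options.txt, and the two long list values are abbreviated from the summary above):

    meson setup build-tmp \
        -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=lib \
        -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Ddisable_apps=dumpcap,graph,pdump,... \
        -Ddisable_libs=acl,argparse,bbdev,... \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
    ninja -C build-tmp
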
lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.911 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:32.198 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:32.198 [34/268] Linking target lib/librte_telemetry.so.24.1 00:03:32.198 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:32.198 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:32.456 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:32.715 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:32.715 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:32.715 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:32.715 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:32.715 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:32.715 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:32.974 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:32.974 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:32.974 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:32.974 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:33.232 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:33.232 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:33.233 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:33.233 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:33.491 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:33.750 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:33.750 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:34.009 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:34.009 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:34.009 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:34.009 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:34.009 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:34.009 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:34.268 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:34.268 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:34.268 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:34.527 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:34.785 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:34.785 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:35.042 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:35.042 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:35.042 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:35.042 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:35.042 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:35.042 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:35.300 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:35.300 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:35.300 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:35.300 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:35.558 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:35.817 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:35.817 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:35.817 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:35.817 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:35.817 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:35.817 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:36.076 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:36.076 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:36.076 [86/268] Linking static target lib/librte_ring.a 00:03:36.076 [87/268] Linking static target lib/librte_eal.a 00:03:36.334 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:36.334 [89/268] Linking static target lib/librte_rcu.a 00:03:36.334 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:36.334 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:36.334 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:36.593 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.593 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:36.593 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:36.852 [96/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.852 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:36.852 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:36.852 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:36.852 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:36.852 [101/268] Linking static target lib/librte_mempool.a 00:03:36.852 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:37.111 [103/268] Linking static target lib/librte_mbuf.a 00:03:37.111 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:37.370 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:37.370 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:37.370 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:37.370 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:37.370 [109/268] Linking static target lib/librte_meter.a 00:03:37.628 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:37.628 [111/268] Linking static target lib/librte_net.a 00:03:37.628 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:37.887 [113/268] 
Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.887 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:38.144 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:38.144 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.144 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.144 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:38.144 [119/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.402 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:38.661 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:38.661 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:38.920 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:39.179 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:39.179 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:39.179 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:39.179 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:39.179 [128/268] Linking static target lib/librte_pci.a 00:03:39.179 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:39.179 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:39.437 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:39.437 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:39.437 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:39.437 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:39.437 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:39.437 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:39.437 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:39.695 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:39.695 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.695 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:39.695 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:39.695 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:39.695 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:39.695 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:39.954 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:39.954 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:39.954 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:39.954 [148/268] Linking static target lib/librte_ethdev.a 00:03:40.212 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:40.212 [150/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:40.212 [151/268] Linking static target lib/librte_cmdline.a 00:03:40.470 [152/268] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:40.470 [153/268] Linking static target lib/librte_timer.a 00:03:40.470 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:40.470 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:40.470 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:40.470 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:40.470 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:40.728 [159/268] Linking static target lib/librte_hash.a 00:03:40.986 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:40.986 [161/268] Linking static target lib/librte_compressdev.a 00:03:41.245 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:41.245 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.245 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:41.245 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:41.504 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:41.504 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:41.762 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:41.762 [169/268] Linking static target lib/librte_dmadev.a 00:03:41.762 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:41.762 [171/268] Linking static target lib/librte_cryptodev.a 00:03:41.762 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.762 [173/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.020 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:42.020 [175/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:42.020 [176/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:42.020 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.278 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:42.536 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:42.536 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:42.536 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:42.536 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:42.536 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:42.794 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.794 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:42.794 [186/268] Linking static target lib/librte_power.a 00:03:42.794 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:43.053 [188/268] Linking static target lib/librte_reorder.a 00:03:43.312 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:43.312 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:43.312 [191/268] Linking static target lib/librte_security.a 00:03:43.312 [192/268] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:03:43.570 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:43.570 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.828 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:44.087 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.087 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.087 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:44.087 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:44.345 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:44.345 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.345 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:44.604 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:44.862 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:44.862 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:45.121 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:45.121 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:45.121 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:45.121 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:45.121 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:45.121 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:45.121 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:45.379 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:45.379 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:45.379 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:45.379 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:45.379 [217/268] Linking static target drivers/librte_bus_vdev.a 00:03:45.379 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:45.379 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:45.379 [220/268] Linking static target drivers/librte_bus_pci.a 00:03:45.379 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:45.379 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:45.638 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:45.638 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:45.638 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:45.638 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:45.638 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.896 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.462 [229/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:46.721 [230/268] Linking static target lib/librte_vhost.a 00:03:47.287 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.546 [232/268] Linking target lib/librte_eal.so.24.1 00:03:47.804 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:47.804 [234/268] Linking target lib/librte_pci.so.24.1 00:03:47.804 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:47.804 [236/268] Linking target lib/librte_dmadev.so.24.1 00:03:47.804 [237/268] Linking target lib/librte_ring.so.24.1 00:03:47.804 [238/268] Linking target lib/librte_meter.so.24.1 00:03:47.804 [239/268] Linking target lib/librte_timer.so.24.1 00:03:47.804 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:47.804 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:47.804 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:47.804 [243/268] Linking target lib/librte_mempool.so.24.1 00:03:47.804 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:47.804 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:47.804 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:47.804 [247/268] Linking target lib/librte_rcu.so.24.1 00:03:48.063 [248/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.063 [249/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.063 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:48.063 [251/268] Linking target lib/librte_mbuf.so.24.1 00:03:48.063 [252/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:48.063 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:48.321 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:48.321 [255/268] Linking target lib/librte_reorder.so.24.1 00:03:48.321 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:03:48.321 [257/268] Linking target lib/librte_compressdev.so.24.1 00:03:48.321 [258/268] Linking target lib/librte_net.so.24.1 00:03:48.321 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:48.321 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:48.580 [261/268] Linking target lib/librte_security.so.24.1 00:03:48.580 [262/268] Linking target lib/librte_hash.so.24.1 00:03:48.580 [263/268] Linking target lib/librte_cmdline.so.24.1 00:03:48.580 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:48.580 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:48.580 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:48.838 [267/268] Linking target lib/librte_power.so.24.1 00:03:48.838 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:48.838 INFO: autodetecting backend as ninja 00:03:48.838 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:15.378 CC lib/ut/ut.o 00:04:15.378 CC lib/log/log.o 00:04:15.378 CC lib/log/log_flags.o 00:04:15.378 CC lib/log/log_deprecated.o 00:04:15.378 CC lib/ut_mock/mock.o 00:04:15.378 LIB 
libspdk_ut_mock.a 00:04:15.378 LIB libspdk_log.a 00:04:15.378 LIB libspdk_ut.a 00:04:15.378 SO libspdk_ut_mock.so.6.0 00:04:15.378 SO libspdk_log.so.7.1 00:04:15.378 SO libspdk_ut.so.2.0 00:04:15.378 SYMLINK libspdk_ut_mock.so 00:04:15.378 SYMLINK libspdk_log.so 00:04:15.378 SYMLINK libspdk_ut.so 00:04:15.378 CXX lib/trace_parser/trace.o 00:04:15.378 CC lib/util/base64.o 00:04:15.378 CC lib/dma/dma.o 00:04:15.378 CC lib/util/cpuset.o 00:04:15.378 CC lib/util/bit_array.o 00:04:15.378 CC lib/util/crc16.o 00:04:15.378 CC lib/util/crc32.o 00:04:15.378 CC lib/util/crc32c.o 00:04:15.378 CC lib/ioat/ioat.o 00:04:15.378 CC lib/vfio_user/host/vfio_user_pci.o 00:04:15.378 CC lib/vfio_user/host/vfio_user.o 00:04:15.378 CC lib/util/crc32_ieee.o 00:04:15.378 CC lib/util/crc64.o 00:04:15.378 CC lib/util/dif.o 00:04:15.378 LIB libspdk_dma.a 00:04:15.378 SO libspdk_dma.so.5.0 00:04:15.378 CC lib/util/fd.o 00:04:15.378 CC lib/util/fd_group.o 00:04:15.378 CC lib/util/file.o 00:04:15.378 SYMLINK libspdk_dma.so 00:04:15.378 CC lib/util/hexlify.o 00:04:15.378 LIB libspdk_ioat.a 00:04:15.378 CC lib/util/iov.o 00:04:15.378 CC lib/util/math.o 00:04:15.378 SO libspdk_ioat.so.7.0 00:04:15.378 LIB libspdk_vfio_user.a 00:04:15.378 SYMLINK libspdk_ioat.so 00:04:15.378 SO libspdk_vfio_user.so.5.0 00:04:15.378 CC lib/util/net.o 00:04:15.378 CC lib/util/pipe.o 00:04:15.378 CC lib/util/strerror_tls.o 00:04:15.378 CC lib/util/string.o 00:04:15.378 SYMLINK libspdk_vfio_user.so 00:04:15.378 CC lib/util/uuid.o 00:04:15.378 CC lib/util/xor.o 00:04:15.378 CC lib/util/zipf.o 00:04:15.378 CC lib/util/md5.o 00:04:15.378 LIB libspdk_util.a 00:04:15.378 SO libspdk_util.so.10.1 00:04:15.378 LIB libspdk_trace_parser.a 00:04:15.378 SO libspdk_trace_parser.so.6.0 00:04:15.378 SYMLINK libspdk_util.so 00:04:15.378 SYMLINK libspdk_trace_parser.so 00:04:15.378 CC lib/conf/conf.o 00:04:15.378 CC lib/env_dpdk/env.o 00:04:15.378 CC lib/env_dpdk/memory.o 00:04:15.378 CC lib/env_dpdk/pci.o 00:04:15.378 CC lib/env_dpdk/init.o 00:04:15.378 CC lib/vmd/led.o 00:04:15.378 CC lib/vmd/vmd.o 00:04:15.378 CC lib/rdma_utils/rdma_utils.o 00:04:15.378 CC lib/idxd/idxd.o 00:04:15.378 CC lib/json/json_parse.o 00:04:15.378 CC lib/json/json_util.o 00:04:15.378 LIB libspdk_conf.a 00:04:15.378 SO libspdk_conf.so.6.0 00:04:15.378 LIB libspdk_rdma_utils.a 00:04:15.378 CC lib/json/json_write.o 00:04:15.378 SO libspdk_rdma_utils.so.1.0 00:04:15.378 SYMLINK libspdk_conf.so 00:04:15.378 CC lib/idxd/idxd_user.o 00:04:15.378 SYMLINK libspdk_rdma_utils.so 00:04:15.378 CC lib/env_dpdk/threads.o 00:04:15.378 CC lib/env_dpdk/pci_ioat.o 00:04:15.378 CC lib/env_dpdk/pci_virtio.o 00:04:15.378 CC lib/env_dpdk/pci_vmd.o 00:04:15.378 CC lib/env_dpdk/pci_idxd.o 00:04:15.378 CC lib/env_dpdk/pci_event.o 00:04:15.378 CC lib/idxd/idxd_kernel.o 00:04:15.378 CC lib/rdma_provider/common.o 00:04:15.378 LIB libspdk_json.a 00:04:15.378 CC lib/env_dpdk/sigbus_handler.o 00:04:15.378 SO libspdk_json.so.6.0 00:04:15.378 CC lib/env_dpdk/pci_dpdk.o 00:04:15.378 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:15.378 LIB libspdk_vmd.a 00:04:15.378 SYMLINK libspdk_json.so 00:04:15.378 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:15.378 SO libspdk_vmd.so.6.0 00:04:15.378 LIB libspdk_idxd.a 00:04:15.378 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:15.636 SO libspdk_idxd.so.12.1 00:04:15.636 SYMLINK libspdk_vmd.so 00:04:15.636 SYMLINK libspdk_idxd.so 00:04:15.636 CC lib/jsonrpc/jsonrpc_server.o 00:04:15.636 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:15.636 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:15.636 CC 
lib/jsonrpc/jsonrpc_client.o 00:04:15.636 LIB libspdk_rdma_provider.a 00:04:15.636 SO libspdk_rdma_provider.so.7.0 00:04:15.636 SYMLINK libspdk_rdma_provider.so 00:04:15.894 LIB libspdk_jsonrpc.a 00:04:15.894 SO libspdk_jsonrpc.so.6.0 00:04:15.894 SYMLINK libspdk_jsonrpc.so 00:04:16.152 LIB libspdk_env_dpdk.a 00:04:16.152 CC lib/rpc/rpc.o 00:04:16.410 SO libspdk_env_dpdk.so.15.1 00:04:16.410 SYMLINK libspdk_env_dpdk.so 00:04:16.410 LIB libspdk_rpc.a 00:04:16.410 SO libspdk_rpc.so.6.0 00:04:16.669 SYMLINK libspdk_rpc.so 00:04:16.669 CC lib/keyring/keyring.o 00:04:16.669 CC lib/notify/notify.o 00:04:16.669 CC lib/keyring/keyring_rpc.o 00:04:16.669 CC lib/notify/notify_rpc.o 00:04:16.669 CC lib/trace/trace_flags.o 00:04:16.669 CC lib/trace/trace.o 00:04:16.669 CC lib/trace/trace_rpc.o 00:04:16.927 LIB libspdk_notify.a 00:04:16.927 SO libspdk_notify.so.6.0 00:04:16.927 SYMLINK libspdk_notify.so 00:04:17.186 LIB libspdk_keyring.a 00:04:17.186 LIB libspdk_trace.a 00:04:17.186 SO libspdk_keyring.so.2.0 00:04:17.186 SO libspdk_trace.so.11.0 00:04:17.186 SYMLINK libspdk_keyring.so 00:04:17.186 SYMLINK libspdk_trace.so 00:04:17.446 CC lib/thread/thread.o 00:04:17.446 CC lib/sock/sock.o 00:04:17.446 CC lib/thread/iobuf.o 00:04:17.446 CC lib/sock/sock_rpc.o 00:04:18.011 LIB libspdk_sock.a 00:04:18.011 SO libspdk_sock.so.10.0 00:04:18.011 SYMLINK libspdk_sock.so 00:04:18.269 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:18.269 CC lib/nvme/nvme_ctrlr.o 00:04:18.269 CC lib/nvme/nvme_fabric.o 00:04:18.269 CC lib/nvme/nvme_ns_cmd.o 00:04:18.269 CC lib/nvme/nvme_pcie_common.o 00:04:18.269 CC lib/nvme/nvme_ns.o 00:04:18.269 CC lib/nvme/nvme_pcie.o 00:04:18.269 CC lib/nvme/nvme_qpair.o 00:04:18.269 CC lib/nvme/nvme.o 00:04:19.294 LIB libspdk_thread.a 00:04:19.294 SO libspdk_thread.so.11.0 00:04:19.294 CC lib/nvme/nvme_quirks.o 00:04:19.294 CC lib/nvme/nvme_transport.o 00:04:19.294 SYMLINK libspdk_thread.so 00:04:19.294 CC lib/nvme/nvme_discovery.o 00:04:19.294 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:19.294 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:19.294 CC lib/nvme/nvme_tcp.o 00:04:19.294 CC lib/nvme/nvme_opal.o 00:04:19.552 CC lib/accel/accel.o 00:04:19.552 CC lib/accel/accel_rpc.o 00:04:19.809 CC lib/nvme/nvme_io_msg.o 00:04:19.809 CC lib/accel/accel_sw.o 00:04:19.809 CC lib/nvme/nvme_poll_group.o 00:04:19.809 CC lib/nvme/nvme_zns.o 00:04:20.068 CC lib/nvme/nvme_stubs.o 00:04:20.068 CC lib/nvme/nvme_auth.o 00:04:20.068 CC lib/nvme/nvme_cuse.o 00:04:20.068 CC lib/nvme/nvme_rdma.o 00:04:20.634 LIB libspdk_accel.a 00:04:20.634 SO libspdk_accel.so.16.0 00:04:20.634 CC lib/blob/blobstore.o 00:04:20.634 SYMLINK libspdk_accel.so 00:04:20.634 CC lib/blob/request.o 00:04:20.634 CC lib/init/json_config.o 00:04:20.892 CC lib/virtio/virtio.o 00:04:20.892 CC lib/fsdev/fsdev.o 00:04:20.892 CC lib/bdev/bdev.o 00:04:20.892 CC lib/fsdev/fsdev_io.o 00:04:20.892 CC lib/blob/zeroes.o 00:04:20.892 CC lib/init/subsystem.o 00:04:20.892 CC lib/init/subsystem_rpc.o 00:04:21.150 CC lib/init/rpc.o 00:04:21.150 CC lib/virtio/virtio_vhost_user.o 00:04:21.150 CC lib/virtio/virtio_vfio_user.o 00:04:21.150 CC lib/virtio/virtio_pci.o 00:04:21.150 CC lib/fsdev/fsdev_rpc.o 00:04:21.150 LIB libspdk_init.a 00:04:21.150 SO libspdk_init.so.6.0 00:04:21.150 CC lib/blob/blob_bs_dev.o 00:04:21.408 CC lib/bdev/bdev_rpc.o 00:04:21.408 SYMLINK libspdk_init.so 00:04:21.408 CC lib/bdev/bdev_zone.o 00:04:21.408 CC lib/bdev/part.o 00:04:21.408 LIB libspdk_virtio.a 00:04:21.408 LIB libspdk_fsdev.a 00:04:21.408 SO libspdk_fsdev.so.2.0 00:04:21.408 SO 
libspdk_virtio.so.7.0 00:04:21.408 CC lib/event/app.o 00:04:21.408 CC lib/event/reactor.o 00:04:21.666 CC lib/bdev/scsi_nvme.o 00:04:21.666 SYMLINK libspdk_fsdev.so 00:04:21.666 SYMLINK libspdk_virtio.so 00:04:21.666 CC lib/event/log_rpc.o 00:04:21.666 LIB libspdk_nvme.a 00:04:21.666 CC lib/event/app_rpc.o 00:04:21.666 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:21.666 CC lib/event/scheduler_static.o 00:04:21.666 SO libspdk_nvme.so.15.0 00:04:21.924 LIB libspdk_event.a 00:04:21.924 SO libspdk_event.so.14.0 00:04:21.924 SYMLINK libspdk_nvme.so 00:04:22.183 SYMLINK libspdk_event.so 00:04:22.441 LIB libspdk_fuse_dispatcher.a 00:04:22.441 SO libspdk_fuse_dispatcher.so.1.0 00:04:22.441 SYMLINK libspdk_fuse_dispatcher.so 00:04:23.816 LIB libspdk_blob.a 00:04:23.816 LIB libspdk_bdev.a 00:04:23.816 SO libspdk_blob.so.11.0 00:04:23.816 SO libspdk_bdev.so.17.0 00:04:23.816 SYMLINK libspdk_blob.so 00:04:24.074 SYMLINK libspdk_bdev.so 00:04:24.074 CC lib/lvol/lvol.o 00:04:24.074 CC lib/blobfs/blobfs.o 00:04:24.074 CC lib/blobfs/tree.o 00:04:24.074 CC lib/nbd/nbd_rpc.o 00:04:24.074 CC lib/ublk/ublk.o 00:04:24.074 CC lib/nbd/nbd.o 00:04:24.074 CC lib/ublk/ublk_rpc.o 00:04:24.074 CC lib/scsi/dev.o 00:04:24.074 CC lib/ftl/ftl_core.o 00:04:24.074 CC lib/nvmf/ctrlr.o 00:04:24.331 CC lib/nvmf/ctrlr_discovery.o 00:04:24.331 CC lib/nvmf/ctrlr_bdev.o 00:04:24.331 CC lib/ftl/ftl_init.o 00:04:24.331 CC lib/scsi/lun.o 00:04:24.590 CC lib/ftl/ftl_layout.o 00:04:24.590 LIB libspdk_nbd.a 00:04:24.590 SO libspdk_nbd.so.7.0 00:04:24.590 CC lib/scsi/port.o 00:04:24.590 SYMLINK libspdk_nbd.so 00:04:24.590 CC lib/nvmf/subsystem.o 00:04:24.848 CC lib/scsi/scsi.o 00:04:24.848 LIB libspdk_ublk.a 00:04:24.848 CC lib/nvmf/nvmf.o 00:04:24.848 CC lib/ftl/ftl_debug.o 00:04:24.848 SO libspdk_ublk.so.3.0 00:04:24.848 SYMLINK libspdk_ublk.so 00:04:24.848 CC lib/nvmf/nvmf_rpc.o 00:04:24.848 CC lib/ftl/ftl_io.o 00:04:24.848 CC lib/scsi/scsi_bdev.o 00:04:25.107 LIB libspdk_blobfs.a 00:04:25.107 CC lib/nvmf/transport.o 00:04:25.107 SO libspdk_blobfs.so.10.0 00:04:25.107 CC lib/ftl/ftl_sb.o 00:04:25.107 SYMLINK libspdk_blobfs.so 00:04:25.107 CC lib/ftl/ftl_l2p.o 00:04:25.107 LIB libspdk_lvol.a 00:04:25.107 CC lib/scsi/scsi_pr.o 00:04:25.107 SO libspdk_lvol.so.10.0 00:04:25.107 SYMLINK libspdk_lvol.so 00:04:25.107 CC lib/scsi/scsi_rpc.o 00:04:25.365 CC lib/scsi/task.o 00:04:25.365 CC lib/ftl/ftl_l2p_flat.o 00:04:25.365 CC lib/ftl/ftl_nv_cache.o 00:04:25.365 CC lib/ftl/ftl_band.o 00:04:25.365 CC lib/nvmf/tcp.o 00:04:25.624 LIB libspdk_scsi.a 00:04:25.624 CC lib/nvmf/stubs.o 00:04:25.624 SO libspdk_scsi.so.9.0 00:04:25.624 SYMLINK libspdk_scsi.so 00:04:25.624 CC lib/nvmf/mdns_server.o 00:04:25.624 CC lib/nvmf/rdma.o 00:04:25.883 CC lib/ftl/ftl_band_ops.o 00:04:25.883 CC lib/iscsi/conn.o 00:04:25.883 CC lib/vhost/vhost.o 00:04:25.883 CC lib/nvmf/auth.o 00:04:25.883 CC lib/ftl/ftl_writer.o 00:04:25.883 CC lib/iscsi/init_grp.o 00:04:26.141 CC lib/vhost/vhost_rpc.o 00:04:26.141 CC lib/ftl/ftl_rq.o 00:04:26.141 CC lib/ftl/ftl_reloc.o 00:04:26.400 CC lib/iscsi/iscsi.o 00:04:26.400 CC lib/iscsi/param.o 00:04:26.400 CC lib/iscsi/portal_grp.o 00:04:26.400 CC lib/vhost/vhost_scsi.o 00:04:26.658 CC lib/ftl/ftl_l2p_cache.o 00:04:26.658 CC lib/ftl/ftl_p2l.o 00:04:26.658 CC lib/iscsi/tgt_node.o 00:04:26.658 CC lib/iscsi/iscsi_subsystem.o 00:04:26.917 CC lib/ftl/ftl_p2l_log.o 00:04:26.917 CC lib/ftl/mngt/ftl_mngt.o 00:04:27.175 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:27.175 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:27.175 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:04:27.175 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:27.175 CC lib/vhost/vhost_blk.o 00:04:27.175 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:27.175 CC lib/vhost/rte_vhost_user.o 00:04:27.175 CC lib/iscsi/iscsi_rpc.o 00:04:27.175 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:27.434 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:27.434 CC lib/iscsi/task.o 00:04:27.434 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:27.434 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:27.434 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:27.434 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:27.693 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:27.693 CC lib/ftl/utils/ftl_conf.o 00:04:27.693 LIB libspdk_iscsi.a 00:04:27.693 CC lib/ftl/utils/ftl_md.o 00:04:27.693 CC lib/ftl/utils/ftl_mempool.o 00:04:27.693 LIB libspdk_nvmf.a 00:04:27.693 SO libspdk_iscsi.so.8.0 00:04:27.950 CC lib/ftl/utils/ftl_bitmap.o 00:04:27.950 SYMLINK libspdk_iscsi.so 00:04:27.950 SO libspdk_nvmf.so.20.0 00:04:27.950 CC lib/ftl/utils/ftl_property.o 00:04:27.950 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:27.950 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:27.950 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:27.950 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:28.209 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:28.209 SYMLINK libspdk_nvmf.so 00:04:28.209 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:28.209 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:28.209 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:28.209 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:28.209 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:28.209 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:28.209 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:28.209 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:28.466 CC lib/ftl/base/ftl_base_dev.o 00:04:28.466 LIB libspdk_vhost.a 00:04:28.466 CC lib/ftl/base/ftl_base_bdev.o 00:04:28.466 CC lib/ftl/ftl_trace.o 00:04:28.466 SO libspdk_vhost.so.8.0 00:04:28.466 SYMLINK libspdk_vhost.so 00:04:28.724 LIB libspdk_ftl.a 00:04:28.988 SO libspdk_ftl.so.9.0 00:04:29.247 SYMLINK libspdk_ftl.so 00:04:29.504 CC module/env_dpdk/env_dpdk_rpc.o 00:04:29.504 CC module/accel/error/accel_error.o 00:04:29.504 CC module/sock/posix/posix.o 00:04:29.504 CC module/fsdev/aio/fsdev_aio.o 00:04:29.763 CC module/accel/ioat/accel_ioat.o 00:04:29.763 CC module/accel/dsa/accel_dsa.o 00:04:29.763 CC module/accel/iaa/accel_iaa.o 00:04:29.763 CC module/keyring/file/keyring.o 00:04:29.763 CC module/blob/bdev/blob_bdev.o 00:04:29.763 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:29.763 LIB libspdk_env_dpdk_rpc.a 00:04:29.763 SO libspdk_env_dpdk_rpc.so.6.0 00:04:29.763 SYMLINK libspdk_env_dpdk_rpc.so 00:04:29.763 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:29.763 CC module/keyring/file/keyring_rpc.o 00:04:29.763 CC module/accel/ioat/accel_ioat_rpc.o 00:04:29.763 CC module/accel/error/accel_error_rpc.o 00:04:29.763 CC module/accel/iaa/accel_iaa_rpc.o 00:04:29.763 LIB libspdk_scheduler_dynamic.a 00:04:30.021 SO libspdk_scheduler_dynamic.so.4.0 00:04:30.021 LIB libspdk_blob_bdev.a 00:04:30.021 CC module/accel/dsa/accel_dsa_rpc.o 00:04:30.021 CC module/fsdev/aio/linux_aio_mgr.o 00:04:30.021 LIB libspdk_keyring_file.a 00:04:30.021 SYMLINK libspdk_scheduler_dynamic.so 00:04:30.021 SO libspdk_blob_bdev.so.11.0 00:04:30.021 SO libspdk_keyring_file.so.2.0 00:04:30.021 LIB libspdk_accel_error.a 00:04:30.021 LIB libspdk_accel_iaa.a 00:04:30.021 SYMLINK libspdk_blob_bdev.so 00:04:30.021 SO libspdk_accel_error.so.2.0 00:04:30.021 SYMLINK libspdk_keyring_file.so 00:04:30.021 SO libspdk_accel_iaa.so.3.0 00:04:30.021 LIB libspdk_accel_ioat.a 00:04:30.021 
LIB libspdk_accel_dsa.a 00:04:30.021 SO libspdk_accel_ioat.so.6.0 00:04:30.021 SYMLINK libspdk_accel_error.so 00:04:30.021 SYMLINK libspdk_accel_iaa.so 00:04:30.021 SO libspdk_accel_dsa.so.5.0 00:04:30.021 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:30.280 SYMLINK libspdk_accel_ioat.so 00:04:30.280 SYMLINK libspdk_accel_dsa.so 00:04:30.280 CC module/keyring/linux/keyring.o 00:04:30.280 LIB libspdk_fsdev_aio.a 00:04:30.280 LIB libspdk_scheduler_dpdk_governor.a 00:04:30.280 CC module/bdev/delay/vbdev_delay.o 00:04:30.280 CC module/sock/uring/uring.o 00:04:30.280 CC module/bdev/error/vbdev_error.o 00:04:30.280 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:30.280 SO libspdk_fsdev_aio.so.1.0 00:04:30.280 CC module/bdev/gpt/gpt.o 00:04:30.280 LIB libspdk_sock_posix.a 00:04:30.280 CC module/keyring/linux/keyring_rpc.o 00:04:30.538 SO libspdk_sock_posix.so.6.0 00:04:30.538 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:30.538 CC module/scheduler/gscheduler/gscheduler.o 00:04:30.538 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:30.538 SYMLINK libspdk_fsdev_aio.so 00:04:30.538 CC module/bdev/gpt/vbdev_gpt.o 00:04:30.538 CC module/blobfs/bdev/blobfs_bdev.o 00:04:30.538 SYMLINK libspdk_sock_posix.so 00:04:30.538 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:30.538 LIB libspdk_keyring_linux.a 00:04:30.538 CC module/bdev/error/vbdev_error_rpc.o 00:04:30.538 LIB libspdk_scheduler_gscheduler.a 00:04:30.538 SO libspdk_keyring_linux.so.1.0 00:04:30.538 SO libspdk_scheduler_gscheduler.so.4.0 00:04:30.538 SYMLINK libspdk_keyring_linux.so 00:04:30.796 SYMLINK libspdk_scheduler_gscheduler.so 00:04:30.796 LIB libspdk_blobfs_bdev.a 00:04:30.796 LIB libspdk_bdev_delay.a 00:04:30.796 SO libspdk_blobfs_bdev.so.6.0 00:04:30.796 LIB libspdk_bdev_gpt.a 00:04:30.796 SO libspdk_bdev_delay.so.6.0 00:04:30.796 LIB libspdk_bdev_error.a 00:04:30.796 SO libspdk_bdev_gpt.so.6.0 00:04:30.796 CC module/bdev/lvol/vbdev_lvol.o 00:04:30.796 SO libspdk_bdev_error.so.6.0 00:04:30.796 SYMLINK libspdk_blobfs_bdev.so 00:04:30.796 SYMLINK libspdk_bdev_delay.so 00:04:30.796 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:30.796 CC module/bdev/null/bdev_null.o 00:04:30.796 CC module/bdev/malloc/bdev_malloc.o 00:04:30.796 SYMLINK libspdk_bdev_gpt.so 00:04:30.796 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:30.796 CC module/bdev/nvme/bdev_nvme.o 00:04:30.796 CC module/bdev/passthru/vbdev_passthru.o 00:04:30.796 SYMLINK libspdk_bdev_error.so 00:04:30.796 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:31.054 CC module/bdev/raid/bdev_raid.o 00:04:31.054 LIB libspdk_sock_uring.a 00:04:31.054 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:31.054 SO libspdk_sock_uring.so.5.0 00:04:31.054 CC module/bdev/raid/bdev_raid_rpc.o 00:04:31.054 CC module/bdev/null/bdev_null_rpc.o 00:04:31.054 SYMLINK libspdk_sock_uring.so 00:04:31.313 LIB libspdk_bdev_passthru.a 00:04:31.313 LIB libspdk_bdev_malloc.a 00:04:31.313 SO libspdk_bdev_passthru.so.6.0 00:04:31.313 SO libspdk_bdev_malloc.so.6.0 00:04:31.313 LIB libspdk_bdev_null.a 00:04:31.313 CC module/bdev/split/vbdev_split.o 00:04:31.313 SYMLINK libspdk_bdev_passthru.so 00:04:31.313 SO libspdk_bdev_null.so.6.0 00:04:31.313 LIB libspdk_bdev_lvol.a 00:04:31.313 SYMLINK libspdk_bdev_malloc.so 00:04:31.313 CC module/bdev/raid/bdev_raid_sb.o 00:04:31.313 CC module/bdev/raid/raid0.o 00:04:31.313 SO libspdk_bdev_lvol.so.6.0 00:04:31.313 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:31.313 SYMLINK libspdk_bdev_null.so 00:04:31.313 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:31.571 SYMLINK 
libspdk_bdev_lvol.so 00:04:31.571 CC module/bdev/split/vbdev_split_rpc.o 00:04:31.571 CC module/bdev/uring/bdev_uring.o 00:04:31.571 CC module/bdev/uring/bdev_uring_rpc.o 00:04:31.571 CC module/bdev/raid/raid1.o 00:04:31.571 CC module/bdev/raid/concat.o 00:04:31.571 LIB libspdk_bdev_split.a 00:04:31.571 SO libspdk_bdev_split.so.6.0 00:04:31.830 CC module/bdev/nvme/nvme_rpc.o 00:04:31.830 LIB libspdk_bdev_zone_block.a 00:04:31.830 SYMLINK libspdk_bdev_split.so 00:04:31.830 CC module/bdev/aio/bdev_aio.o 00:04:31.830 SO libspdk_bdev_zone_block.so.6.0 00:04:31.830 SYMLINK libspdk_bdev_zone_block.so 00:04:31.830 LIB libspdk_bdev_uring.a 00:04:31.830 CC module/bdev/ftl/bdev_ftl.o 00:04:31.830 SO libspdk_bdev_uring.so.6.0 00:04:31.830 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:31.830 CC module/bdev/nvme/bdev_mdns_client.o 00:04:31.830 CC module/bdev/iscsi/bdev_iscsi.o 00:04:32.117 SYMLINK libspdk_bdev_uring.so 00:04:32.117 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:32.117 CC module/bdev/aio/bdev_aio_rpc.o 00:04:32.117 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:32.117 LIB libspdk_bdev_raid.a 00:04:32.117 SO libspdk_bdev_raid.so.6.0 00:04:32.117 CC module/bdev/nvme/vbdev_opal.o 00:04:32.117 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:32.117 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:32.117 SYMLINK libspdk_bdev_raid.so 00:04:32.117 LIB libspdk_bdev_ftl.a 00:04:32.117 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:32.117 SO libspdk_bdev_ftl.so.6.0 00:04:32.117 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:32.376 LIB libspdk_bdev_aio.a 00:04:32.376 SYMLINK libspdk_bdev_ftl.so 00:04:32.376 SO libspdk_bdev_aio.so.6.0 00:04:32.376 LIB libspdk_bdev_iscsi.a 00:04:32.376 SYMLINK libspdk_bdev_aio.so 00:04:32.376 SO libspdk_bdev_iscsi.so.6.0 00:04:32.376 SYMLINK libspdk_bdev_iscsi.so 00:04:32.635 LIB libspdk_bdev_virtio.a 00:04:32.635 SO libspdk_bdev_virtio.so.6.0 00:04:32.635 SYMLINK libspdk_bdev_virtio.so 00:04:33.571 LIB libspdk_bdev_nvme.a 00:04:33.571 SO libspdk_bdev_nvme.so.7.1 00:04:33.830 SYMLINK libspdk_bdev_nvme.so 00:04:34.399 CC module/event/subsystems/fsdev/fsdev.o 00:04:34.399 CC module/event/subsystems/vmd/vmd.o 00:04:34.399 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:34.399 CC module/event/subsystems/keyring/keyring.o 00:04:34.399 CC module/event/subsystems/scheduler/scheduler.o 00:04:34.399 CC module/event/subsystems/sock/sock.o 00:04:34.399 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:34.399 CC module/event/subsystems/iobuf/iobuf.o 00:04:34.399 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:34.399 LIB libspdk_event_keyring.a 00:04:34.399 LIB libspdk_event_fsdev.a 00:04:34.399 LIB libspdk_event_vhost_blk.a 00:04:34.399 LIB libspdk_event_vmd.a 00:04:34.399 LIB libspdk_event_sock.a 00:04:34.399 LIB libspdk_event_scheduler.a 00:04:34.399 SO libspdk_event_keyring.so.1.0 00:04:34.399 SO libspdk_event_fsdev.so.1.0 00:04:34.399 LIB libspdk_event_iobuf.a 00:04:34.399 SO libspdk_event_vhost_blk.so.3.0 00:04:34.399 SO libspdk_event_sock.so.5.0 00:04:34.399 SO libspdk_event_vmd.so.6.0 00:04:34.399 SO libspdk_event_scheduler.so.4.0 00:04:34.399 SO libspdk_event_iobuf.so.3.0 00:04:34.399 SYMLINK libspdk_event_keyring.so 00:04:34.399 SYMLINK libspdk_event_sock.so 00:04:34.399 SYMLINK libspdk_event_fsdev.so 00:04:34.399 SYMLINK libspdk_event_vmd.so 00:04:34.399 SYMLINK libspdk_event_vhost_blk.so 00:04:34.400 SYMLINK libspdk_event_scheduler.so 00:04:34.400 SYMLINK libspdk_event_iobuf.so 00:04:34.967 CC module/event/subsystems/accel/accel.o 00:04:34.967 LIB libspdk_event_accel.a 
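The libspdk_event_* objects being compiled and linked here are SPDK's runtime subsystems (bdev, accel, sock, iobuf and so on), which an application initializes before it can serve I/O. As a hedged illustration of how the freshly built target is normally brought up, with paths and flags taken from standard SPDK usage rather than from any command in this job:

  # Launch the just-built target on cores 0-1, then block until every
  # registered event subsystem has finished initializing.
  ./build/bin/spdk_tgt -m 0x3 &
  ./scripts/rpc.py framework_wait_init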
00:04:34.967 SO libspdk_event_accel.so.6.0 00:04:34.967 SYMLINK libspdk_event_accel.so 00:04:35.534 CC module/event/subsystems/bdev/bdev.o 00:04:35.534 LIB libspdk_event_bdev.a 00:04:35.534 SO libspdk_event_bdev.so.6.0 00:04:35.792 SYMLINK libspdk_event_bdev.so 00:04:35.792 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:35.792 CC module/event/subsystems/nbd/nbd.o 00:04:35.792 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:35.792 CC module/event/subsystems/scsi/scsi.o 00:04:35.792 CC module/event/subsystems/ublk/ublk.o 00:04:36.050 LIB libspdk_event_nbd.a 00:04:36.050 LIB libspdk_event_ublk.a 00:04:36.050 LIB libspdk_event_scsi.a 00:04:36.050 SO libspdk_event_nbd.so.6.0 00:04:36.050 SO libspdk_event_ublk.so.3.0 00:04:36.050 SO libspdk_event_scsi.so.6.0 00:04:36.050 SYMLINK libspdk_event_nbd.so 00:04:36.308 SYMLINK libspdk_event_ublk.so 00:04:36.308 SYMLINK libspdk_event_scsi.so 00:04:36.308 LIB libspdk_event_nvmf.a 00:04:36.308 SO libspdk_event_nvmf.so.6.0 00:04:36.308 SYMLINK libspdk_event_nvmf.so 00:04:36.308 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:36.308 CC module/event/subsystems/iscsi/iscsi.o 00:04:36.584 LIB libspdk_event_vhost_scsi.a 00:04:36.584 LIB libspdk_event_iscsi.a 00:04:36.584 SO libspdk_event_vhost_scsi.so.3.0 00:04:36.584 SO libspdk_event_iscsi.so.6.0 00:04:36.883 SYMLINK libspdk_event_vhost_scsi.so 00:04:36.883 SYMLINK libspdk_event_iscsi.so 00:04:36.883 SO libspdk.so.6.0 00:04:36.883 SYMLINK libspdk.so 00:04:37.141 CC app/trace_record/trace_record.o 00:04:37.141 CXX app/trace/trace.o 00:04:37.141 CC app/spdk_nvme_perf/perf.o 00:04:37.141 CC app/spdk_lspci/spdk_lspci.o 00:04:37.141 CC app/spdk_nvme_identify/identify.o 00:04:37.141 CC app/iscsi_tgt/iscsi_tgt.o 00:04:37.141 CC app/nvmf_tgt/nvmf_main.o 00:04:37.141 CC app/spdk_tgt/spdk_tgt.o 00:04:37.141 CC test/thread/poller_perf/poller_perf.o 00:04:37.399 CC examples/util/zipf/zipf.o 00:04:37.399 LINK spdk_lspci 00:04:37.399 LINK poller_perf 00:04:37.399 LINK nvmf_tgt 00:04:37.400 LINK iscsi_tgt 00:04:37.400 LINK zipf 00:04:37.400 LINK spdk_trace_record 00:04:37.658 LINK spdk_tgt 00:04:37.658 LINK spdk_trace 00:04:37.658 CC app/spdk_nvme_discover/discovery_aer.o 00:04:37.658 CC app/spdk_top/spdk_top.o 00:04:37.658 CC test/dma/test_dma/test_dma.o 00:04:37.915 CC app/spdk_dd/spdk_dd.o 00:04:37.916 CC examples/ioat/perf/perf.o 00:04:37.916 CC examples/vmd/lsvmd/lsvmd.o 00:04:37.916 LINK spdk_nvme_discover 00:04:37.916 CC app/fio/nvme/fio_plugin.o 00:04:37.916 CC examples/idxd/perf/perf.o 00:04:38.174 LINK spdk_nvme_identify 00:04:38.174 LINK lsvmd 00:04:38.174 LINK ioat_perf 00:04:38.174 LINK spdk_nvme_perf 00:04:38.174 CC app/fio/bdev/fio_plugin.o 00:04:38.174 LINK test_dma 00:04:38.174 LINK spdk_dd 00:04:38.174 CC examples/ioat/verify/verify.o 00:04:38.174 CC examples/vmd/led/led.o 00:04:38.432 LINK idxd_perf 00:04:38.432 CC app/vhost/vhost.o 00:04:38.432 LINK led 00:04:38.432 LINK spdk_nvme 00:04:38.432 LINK verify 00:04:38.690 LINK vhost 00:04:38.690 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:38.690 TEST_HEADER include/spdk/accel.h 00:04:38.690 TEST_HEADER include/spdk/accel_module.h 00:04:38.690 TEST_HEADER include/spdk/assert.h 00:04:38.690 TEST_HEADER include/spdk/barrier.h 00:04:38.690 TEST_HEADER include/spdk/base64.h 00:04:38.690 TEST_HEADER include/spdk/bdev.h 00:04:38.690 CC test/app/bdev_svc/bdev_svc.o 00:04:38.690 TEST_HEADER include/spdk/bdev_module.h 00:04:38.690 TEST_HEADER include/spdk/bdev_zone.h 00:04:38.690 TEST_HEADER include/spdk/bit_array.h 00:04:38.690 LINK spdk_top 
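The TEST_HEADER lines around this point come from the header-inclusion test enumerating the public spdk/ headers, while the LINK steps show the functional test binaries (nvmf_tgt, spdk_tgt, spdk_top, the fio plugins) coming together. Those binaries are what the NVMe-oF/TCP tests selected by SPDK_TEST_NVMF=1 and SPDK_TEST_NVMF_TRANSPORT=tcp will drive. The core target setup those scripts perform over JSON-RPC looks roughly like the sketch below; the RPC names are standard scripts/rpc.py commands, but the NQN, serial, address and port are illustrative values, not ones taken from this run:

  # Minimal NVMe-oF TCP target: one malloc bdev exported as a namespace.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420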
00:04:38.690 TEST_HEADER include/spdk/bit_pool.h 00:04:38.690 TEST_HEADER include/spdk/blob_bdev.h 00:04:38.690 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:38.690 TEST_HEADER include/spdk/blobfs.h 00:04:38.690 TEST_HEADER include/spdk/blob.h 00:04:38.690 TEST_HEADER include/spdk/conf.h 00:04:38.690 TEST_HEADER include/spdk/config.h 00:04:38.690 TEST_HEADER include/spdk/cpuset.h 00:04:38.690 TEST_HEADER include/spdk/crc16.h 00:04:38.690 LINK spdk_bdev 00:04:38.690 TEST_HEADER include/spdk/crc32.h 00:04:38.690 TEST_HEADER include/spdk/crc64.h 00:04:38.690 TEST_HEADER include/spdk/dif.h 00:04:38.690 TEST_HEADER include/spdk/dma.h 00:04:38.690 TEST_HEADER include/spdk/endian.h 00:04:38.690 TEST_HEADER include/spdk/env_dpdk.h 00:04:38.690 TEST_HEADER include/spdk/env.h 00:04:38.690 TEST_HEADER include/spdk/event.h 00:04:38.690 TEST_HEADER include/spdk/fd_group.h 00:04:38.690 TEST_HEADER include/spdk/fd.h 00:04:38.690 TEST_HEADER include/spdk/file.h 00:04:38.690 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:38.690 TEST_HEADER include/spdk/fsdev.h 00:04:38.690 TEST_HEADER include/spdk/fsdev_module.h 00:04:38.690 TEST_HEADER include/spdk/ftl.h 00:04:38.690 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:38.690 TEST_HEADER include/spdk/gpt_spec.h 00:04:38.690 TEST_HEADER include/spdk/hexlify.h 00:04:38.690 TEST_HEADER include/spdk/histogram_data.h 00:04:38.690 TEST_HEADER include/spdk/idxd.h 00:04:38.690 TEST_HEADER include/spdk/idxd_spec.h 00:04:38.690 TEST_HEADER include/spdk/init.h 00:04:38.690 TEST_HEADER include/spdk/ioat.h 00:04:38.690 TEST_HEADER include/spdk/ioat_spec.h 00:04:38.690 TEST_HEADER include/spdk/iscsi_spec.h 00:04:38.690 TEST_HEADER include/spdk/json.h 00:04:38.690 TEST_HEADER include/spdk/jsonrpc.h 00:04:38.690 TEST_HEADER include/spdk/keyring.h 00:04:38.690 TEST_HEADER include/spdk/keyring_module.h 00:04:38.690 TEST_HEADER include/spdk/likely.h 00:04:38.690 TEST_HEADER include/spdk/log.h 00:04:38.690 TEST_HEADER include/spdk/lvol.h 00:04:38.690 TEST_HEADER include/spdk/md5.h 00:04:38.690 TEST_HEADER include/spdk/memory.h 00:04:38.690 TEST_HEADER include/spdk/mmio.h 00:04:38.690 TEST_HEADER include/spdk/nbd.h 00:04:38.690 TEST_HEADER include/spdk/net.h 00:04:38.690 TEST_HEADER include/spdk/notify.h 00:04:38.690 TEST_HEADER include/spdk/nvme.h 00:04:38.690 TEST_HEADER include/spdk/nvme_intel.h 00:04:38.690 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:38.690 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:38.690 TEST_HEADER include/spdk/nvme_spec.h 00:04:38.949 TEST_HEADER include/spdk/nvme_zns.h 00:04:38.949 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:38.949 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:38.949 TEST_HEADER include/spdk/nvmf.h 00:04:38.949 TEST_HEADER include/spdk/nvmf_spec.h 00:04:38.949 TEST_HEADER include/spdk/nvmf_transport.h 00:04:38.949 TEST_HEADER include/spdk/opal.h 00:04:38.949 TEST_HEADER include/spdk/opal_spec.h 00:04:38.949 TEST_HEADER include/spdk/pci_ids.h 00:04:38.949 TEST_HEADER include/spdk/pipe.h 00:04:38.949 TEST_HEADER include/spdk/queue.h 00:04:38.949 TEST_HEADER include/spdk/reduce.h 00:04:38.949 TEST_HEADER include/spdk/rpc.h 00:04:38.949 CC test/event/event_perf/event_perf.o 00:04:38.949 TEST_HEADER include/spdk/scheduler.h 00:04:38.949 TEST_HEADER include/spdk/scsi.h 00:04:38.949 TEST_HEADER include/spdk/scsi_spec.h 00:04:38.949 TEST_HEADER include/spdk/sock.h 00:04:38.949 TEST_HEADER include/spdk/stdinc.h 00:04:38.949 TEST_HEADER include/spdk/string.h 00:04:38.949 LINK interrupt_tgt 00:04:38.949 TEST_HEADER include/spdk/thread.h 
00:04:38.949 TEST_HEADER include/spdk/trace.h 00:04:38.949 TEST_HEADER include/spdk/trace_parser.h 00:04:38.949 TEST_HEADER include/spdk/tree.h 00:04:38.949 LINK bdev_svc 00:04:38.949 TEST_HEADER include/spdk/ublk.h 00:04:38.949 TEST_HEADER include/spdk/util.h 00:04:38.949 TEST_HEADER include/spdk/uuid.h 00:04:38.949 TEST_HEADER include/spdk/version.h 00:04:38.949 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:38.949 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:38.949 TEST_HEADER include/spdk/vhost.h 00:04:38.949 TEST_HEADER include/spdk/vmd.h 00:04:38.949 TEST_HEADER include/spdk/xor.h 00:04:38.949 TEST_HEADER include/spdk/zipf.h 00:04:38.949 CXX test/cpp_headers/accel.o 00:04:38.949 CC test/nvme/aer/aer.o 00:04:38.949 CC test/app/histogram_perf/histogram_perf.o 00:04:38.949 CC test/event/reactor/reactor.o 00:04:38.949 CC test/env/mem_callbacks/mem_callbacks.o 00:04:38.949 CC test/event/reactor_perf/reactor_perf.o 00:04:38.949 LINK event_perf 00:04:38.949 CXX test/cpp_headers/accel_module.o 00:04:39.208 LINK histogram_perf 00:04:39.208 CXX test/cpp_headers/assert.o 00:04:39.208 LINK reactor 00:04:39.208 LINK reactor_perf 00:04:39.208 LINK nvme_fuzz 00:04:39.208 LINK aer 00:04:39.208 CXX test/cpp_headers/barrier.o 00:04:39.208 CC test/app/jsoncat/jsoncat.o 00:04:39.208 CC examples/thread/thread/thread_ex.o 00:04:39.208 CXX test/cpp_headers/base64.o 00:04:39.208 CXX test/cpp_headers/bdev.o 00:04:39.466 CC test/event/app_repeat/app_repeat.o 00:04:39.466 CC test/app/stub/stub.o 00:04:39.466 LINK jsoncat 00:04:39.466 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:39.466 CC test/nvme/reset/reset.o 00:04:39.466 CC test/env/vtophys/vtophys.o 00:04:39.466 CXX test/cpp_headers/bdev_module.o 00:04:39.466 LINK thread 00:04:39.466 LINK app_repeat 00:04:39.466 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:39.466 CXX test/cpp_headers/bdev_zone.o 00:04:39.466 LINK mem_callbacks 00:04:39.725 LINK stub 00:04:39.725 LINK vtophys 00:04:39.725 LINK env_dpdk_post_init 00:04:39.725 CXX test/cpp_headers/bit_array.o 00:04:39.725 LINK reset 00:04:39.725 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:39.725 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:39.725 CC test/env/memory/memory_ut.o 00:04:39.983 CC test/event/scheduler/scheduler.o 00:04:39.983 CC test/nvme/sgl/sgl.o 00:04:39.983 CXX test/cpp_headers/bit_pool.o 00:04:39.983 CC examples/sock/hello_world/hello_sock.o 00:04:39.983 CC test/env/pci/pci_ut.o 00:04:39.983 CC test/nvme/e2edp/nvme_dp.o 00:04:39.983 CC test/nvme/overhead/overhead.o 00:04:40.241 LINK scheduler 00:04:40.241 CXX test/cpp_headers/blob_bdev.o 00:04:40.241 LINK sgl 00:04:40.241 LINK hello_sock 00:04:40.241 LINK vhost_fuzz 00:04:40.241 CXX test/cpp_headers/blobfs_bdev.o 00:04:40.241 LINK nvme_dp 00:04:40.241 LINK overhead 00:04:40.500 LINK pci_ut 00:04:40.500 CC test/nvme/err_injection/err_injection.o 00:04:40.500 CC test/nvme/startup/startup.o 00:04:40.500 CC test/nvme/reserve/reserve.o 00:04:40.500 CXX test/cpp_headers/blobfs.o 00:04:40.500 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:40.500 CC test/nvme/simple_copy/simple_copy.o 00:04:40.500 LINK startup 00:04:40.757 LINK err_injection 00:04:40.757 CC test/nvme/connect_stress/connect_stress.o 00:04:40.757 LINK reserve 00:04:40.757 CXX test/cpp_headers/blob.o 00:04:40.757 CC test/nvme/boot_partition/boot_partition.o 00:04:40.757 LINK connect_stress 00:04:40.757 LINK simple_copy 00:04:40.757 LINK hello_fsdev 00:04:40.757 CC test/nvme/compliance/nvme_compliance.o 00:04:41.015 CXX test/cpp_headers/conf.o 00:04:41.015 
LINK boot_partition 00:04:41.015 CC examples/accel/perf/accel_perf.o 00:04:41.015 CC test/nvme/fused_ordering/fused_ordering.o 00:04:41.015 LINK memory_ut 00:04:41.015 LINK iscsi_fuzz 00:04:41.015 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:41.015 CXX test/cpp_headers/config.o 00:04:41.015 CC test/nvme/fdp/fdp.o 00:04:41.015 CXX test/cpp_headers/cpuset.o 00:04:41.273 CC test/nvme/cuse/cuse.o 00:04:41.273 LINK fused_ordering 00:04:41.273 LINK nvme_compliance 00:04:41.273 CC test/rpc_client/rpc_client_test.o 00:04:41.273 CXX test/cpp_headers/crc16.o 00:04:41.273 LINK doorbell_aers 00:04:41.273 CXX test/cpp_headers/crc32.o 00:04:41.529 LINK rpc_client_test 00:04:41.529 LINK fdp 00:04:41.529 CC test/accel/dif/dif.o 00:04:41.529 LINK accel_perf 00:04:41.529 CC test/blobfs/mkfs/mkfs.o 00:04:41.529 CXX test/cpp_headers/crc64.o 00:04:41.529 CXX test/cpp_headers/dif.o 00:04:41.529 CC examples/blob/hello_world/hello_blob.o 00:04:41.529 CC examples/blob/cli/blobcli.o 00:04:41.529 CXX test/cpp_headers/dma.o 00:04:41.788 CXX test/cpp_headers/endian.o 00:04:41.788 CXX test/cpp_headers/env_dpdk.o 00:04:41.788 LINK mkfs 00:04:41.788 CC test/lvol/esnap/esnap.o 00:04:41.788 CXX test/cpp_headers/env.o 00:04:41.788 LINK hello_blob 00:04:41.788 CXX test/cpp_headers/event.o 00:04:42.046 CXX test/cpp_headers/fd_group.o 00:04:42.046 CC examples/nvme/hello_world/hello_world.o 00:04:42.046 CXX test/cpp_headers/fd.o 00:04:42.046 CXX test/cpp_headers/file.o 00:04:42.046 LINK blobcli 00:04:42.046 LINK dif 00:04:42.046 CXX test/cpp_headers/fsdev.o 00:04:42.046 CC examples/bdev/hello_world/hello_bdev.o 00:04:42.046 CC examples/nvme/reconnect/reconnect.o 00:04:42.304 LINK hello_world 00:04:42.304 CC examples/bdev/bdevperf/bdevperf.o 00:04:42.304 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:42.304 CXX test/cpp_headers/fsdev_module.o 00:04:42.304 CXX test/cpp_headers/ftl.o 00:04:42.304 CC examples/nvme/arbitration/arbitration.o 00:04:42.304 LINK hello_bdev 00:04:42.304 CC examples/nvme/hotplug/hotplug.o 00:04:42.561 LINK cuse 00:04:42.561 LINK reconnect 00:04:42.561 CXX test/cpp_headers/fuse_dispatcher.o 00:04:42.561 CXX test/cpp_headers/gpt_spec.o 00:04:42.561 LINK hotplug 00:04:42.561 CXX test/cpp_headers/hexlify.o 00:04:42.561 CXX test/cpp_headers/histogram_data.o 00:04:42.819 LINK arbitration 00:04:42.819 CC test/bdev/bdevio/bdevio.o 00:04:42.819 CXX test/cpp_headers/idxd.o 00:04:42.819 CXX test/cpp_headers/idxd_spec.o 00:04:42.819 LINK nvme_manage 00:04:42.819 CXX test/cpp_headers/init.o 00:04:42.819 CXX test/cpp_headers/ioat.o 00:04:42.819 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:43.077 CC examples/nvme/abort/abort.o 00:04:43.077 CXX test/cpp_headers/ioat_spec.o 00:04:43.077 CXX test/cpp_headers/iscsi_spec.o 00:04:43.077 CXX test/cpp_headers/json.o 00:04:43.077 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:43.077 CXX test/cpp_headers/jsonrpc.o 00:04:43.077 LINK cmb_copy 00:04:43.077 LINK bdevperf 00:04:43.077 LINK bdevio 00:04:43.077 CXX test/cpp_headers/keyring.o 00:04:43.077 CXX test/cpp_headers/keyring_module.o 00:04:43.335 CXX test/cpp_headers/likely.o 00:04:43.335 CXX test/cpp_headers/log.o 00:04:43.335 LINK pmr_persistence 00:04:43.335 CXX test/cpp_headers/lvol.o 00:04:43.335 CXX test/cpp_headers/md5.o 00:04:43.335 CXX test/cpp_headers/memory.o 00:04:43.335 CXX test/cpp_headers/nbd.o 00:04:43.335 CXX test/cpp_headers/mmio.o 00:04:43.335 LINK abort 00:04:43.335 CXX test/cpp_headers/net.o 00:04:43.335 CXX test/cpp_headers/notify.o 00:04:43.335 CXX test/cpp_headers/nvme.o 00:04:43.335 CXX 
test/cpp_headers/nvme_intel.o 00:04:43.335 CXX test/cpp_headers/nvme_ocssd.o 00:04:43.593 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:43.593 CXX test/cpp_headers/nvme_spec.o 00:04:43.593 CXX test/cpp_headers/nvme_zns.o 00:04:43.593 CXX test/cpp_headers/nvmf_cmd.o 00:04:43.593 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:43.593 CXX test/cpp_headers/nvmf.o 00:04:43.593 CXX test/cpp_headers/nvmf_spec.o 00:04:43.593 CXX test/cpp_headers/nvmf_transport.o 00:04:43.593 CXX test/cpp_headers/opal.o 00:04:43.593 CXX test/cpp_headers/opal_spec.o 00:04:43.851 CC examples/nvmf/nvmf/nvmf.o 00:04:43.851 CXX test/cpp_headers/pci_ids.o 00:04:43.851 CXX test/cpp_headers/pipe.o 00:04:43.851 CXX test/cpp_headers/queue.o 00:04:43.851 CXX test/cpp_headers/reduce.o 00:04:43.851 CXX test/cpp_headers/rpc.o 00:04:43.851 CXX test/cpp_headers/scheduler.o 00:04:43.851 CXX test/cpp_headers/scsi.o 00:04:43.851 CXX test/cpp_headers/scsi_spec.o 00:04:43.851 CXX test/cpp_headers/sock.o 00:04:43.851 CXX test/cpp_headers/stdinc.o 00:04:43.851 CXX test/cpp_headers/string.o 00:04:44.109 CXX test/cpp_headers/thread.o 00:04:44.109 CXX test/cpp_headers/trace.o 00:04:44.109 CXX test/cpp_headers/trace_parser.o 00:04:44.109 CXX test/cpp_headers/tree.o 00:04:44.109 CXX test/cpp_headers/ublk.o 00:04:44.109 CXX test/cpp_headers/util.o 00:04:44.109 LINK nvmf 00:04:44.109 CXX test/cpp_headers/uuid.o 00:04:44.109 CXX test/cpp_headers/version.o 00:04:44.109 CXX test/cpp_headers/vfio_user_pci.o 00:04:44.109 CXX test/cpp_headers/vfio_user_spec.o 00:04:44.109 CXX test/cpp_headers/vhost.o 00:04:44.109 CXX test/cpp_headers/vmd.o 00:04:44.109 CXX test/cpp_headers/xor.o 00:04:44.368 CXX test/cpp_headers/zipf.o 00:04:47.655 LINK esnap 00:04:47.655 00:04:47.655 real 1m30.121s 00:04:47.655 user 8m14.012s 00:04:47.655 sys 1m38.083s 00:04:47.655 10:28:12 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:04:47.655 10:28:12 make -- common/autotest_common.sh@10 -- $ set +x 00:04:47.655 ************************************ 00:04:47.655 END TEST make 00:04:47.655 ************************************ 00:04:47.655 10:28:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:47.655 10:28:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:47.655 10:28:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:47.655 10:28:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:47.655 10:28:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:47.655 10:28:12 -- pm/common@44 -- $ pid=5305 00:04:47.655 10:28:12 -- pm/common@50 -- $ kill -TERM 5305 00:04:47.655 10:28:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:47.655 10:28:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:47.655 10:28:12 -- pm/common@44 -- $ pid=5307 00:04:47.655 10:28:12 -- pm/common@50 -- $ kill -TERM 5307 00:04:47.655 10:28:12 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:47.655 10:28:12 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:47.655 10:28:12 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:47.655 10:28:12 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:47.655 10:28:12 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:47.655 10:28:13 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:47.655 10:28:13 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 
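The xtrace that follows steps through the lt/cmp_versions helpers in scripts/common.sh, which autotest uses to decide whether the installed lcov (1.15 here) predates version 2 and therefore needs the legacy --rc lcov_* option spelling. Condensed into a paraphrase of the traced logic, not the verbatim script, the comparison amounts to:

  # Split both versions on '.', '-' or ':' and compare numerically field
  # by field; missing fields count as 0. Succeeds when $1 < $2.
  lt() {
      local -a v1 v2
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for ((i = 0; i < n; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1 # all fields equal, so not strictly less-than
  }

lt 1.15 2 succeeds on the first field (1 < 2), which is why the LCOV_OPTS exported below still carry the lcov_branch_coverage/lcov_function_coverage form.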
00:04:47.655 10:28:13 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.655 10:28:13 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.655 10:28:13 -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.655 10:28:13 -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.655 10:28:13 -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.655 10:28:13 -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.655 10:28:13 -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.655 10:28:13 -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.655 10:28:13 -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.655 10:28:13 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.655 10:28:13 -- scripts/common.sh@344 -- # case "$op" in 00:04:47.655 10:28:13 -- scripts/common.sh@345 -- # : 1 00:04:47.655 10:28:13 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.655 10:28:13 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:47.655 10:28:13 -- scripts/common.sh@365 -- # decimal 1 00:04:47.655 10:28:13 -- scripts/common.sh@353 -- # local d=1 00:04:47.655 10:28:13 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.655 10:28:13 -- scripts/common.sh@355 -- # echo 1 00:04:47.655 10:28:13 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.655 10:28:13 -- scripts/common.sh@366 -- # decimal 2 00:04:47.655 10:28:13 -- scripts/common.sh@353 -- # local d=2 00:04:47.655 10:28:13 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.655 10:28:13 -- scripts/common.sh@355 -- # echo 2 00:04:47.655 10:28:13 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.655 10:28:13 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.655 10:28:13 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.655 10:28:13 -- scripts/common.sh@368 -- # return 0 00:04:47.655 10:28:13 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.655 10:28:13 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:47.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.655 --rc genhtml_branch_coverage=1 00:04:47.655 --rc genhtml_function_coverage=1 00:04:47.655 --rc genhtml_legend=1 00:04:47.655 --rc geninfo_all_blocks=1 00:04:47.655 --rc geninfo_unexecuted_blocks=1 00:04:47.655 00:04:47.655 ' 00:04:47.655 10:28:13 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:47.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.655 --rc genhtml_branch_coverage=1 00:04:47.655 --rc genhtml_function_coverage=1 00:04:47.655 --rc genhtml_legend=1 00:04:47.655 --rc geninfo_all_blocks=1 00:04:47.655 --rc geninfo_unexecuted_blocks=1 00:04:47.655 00:04:47.655 ' 00:04:47.655 10:28:13 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:47.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.655 --rc genhtml_branch_coverage=1 00:04:47.655 --rc genhtml_function_coverage=1 00:04:47.655 --rc genhtml_legend=1 00:04:47.655 --rc geninfo_all_blocks=1 00:04:47.655 --rc geninfo_unexecuted_blocks=1 00:04:47.655 00:04:47.655 ' 00:04:47.655 10:28:13 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:47.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.655 --rc genhtml_branch_coverage=1 00:04:47.655 --rc genhtml_function_coverage=1 00:04:47.655 --rc genhtml_legend=1 00:04:47.655 --rc geninfo_all_blocks=1 00:04:47.655 --rc geninfo_unexecuted_blocks=1 00:04:47.655 00:04:47.655 ' 00:04:47.655 10:28:13 -- spdk/autotest.sh@25 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:47.655 10:28:13 -- nvmf/common.sh@7 -- # uname -s 00:04:47.655 10:28:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.655 10:28:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.655 10:28:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.655 10:28:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.655 10:28:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.655 10:28:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.655 10:28:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.655 10:28:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.655 10:28:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.655 10:28:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.655 10:28:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:04:47.655 10:28:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:04:47.655 10:28:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.655 10:28:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.655 10:28:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:47.655 10:28:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.655 10:28:13 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:47.655 10:28:13 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.655 10:28:13 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.655 10:28:13 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.655 10:28:13 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.655 10:28:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.655 10:28:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.655 10:28:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.655 10:28:13 -- paths/export.sh@5 -- # export PATH 00:04:47.655 10:28:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.655 10:28:13 -- nvmf/common.sh@51 -- # : 0 00:04:47.656 10:28:13 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:47.656 10:28:13 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:47.656 10:28:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.656 10:28:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.656 10:28:13 -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.656 10:28:13 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:47.656 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:47.656 10:28:13 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:47.656 10:28:13 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:47.656 10:28:13 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:47.656 10:28:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:47.656 10:28:13 -- spdk/autotest.sh@32 -- # uname -s 00:04:47.656 10:28:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:47.656 10:28:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:47.656 10:28:13 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:47.656 10:28:13 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:47.656 10:28:13 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:47.656 10:28:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:47.656 10:28:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:47.656 10:28:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:47.656 10:28:13 -- spdk/autotest.sh@48 -- # udevadm_pid=54414 00:04:47.656 10:28:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:47.656 10:28:13 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:47.656 10:28:13 -- pm/common@17 -- # local monitor 00:04:47.656 10:28:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:47.656 10:28:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:47.656 10:28:13 -- pm/common@25 -- # sleep 1 00:04:47.656 10:28:13 -- pm/common@21 -- # date +%s 00:04:47.656 10:28:13 -- pm/common@21 -- # date +%s 00:04:47.656 10:28:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731666493 00:04:47.656 10:28:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731666493 00:04:47.914 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731666493_collect-cpu-load.pm.log 00:04:47.914 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731666493_collect-vmstat.pm.log 00:04:48.863 10:28:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:48.863 10:28:14 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:48.863 10:28:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:48.863 10:28:14 -- common/autotest_common.sh@10 -- # set +x 00:04:48.863 10:28:14 -- spdk/autotest.sh@59 -- # create_test_list 00:04:48.863 10:28:14 -- common/autotest_common.sh@750 -- # xtrace_disable 00:04:48.863 10:28:14 -- common/autotest_common.sh@10 -- # set +x 00:04:48.863 10:28:14 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:48.863 10:28:14 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:48.863 10:28:14 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:48.863 10:28:14 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:48.863 10:28:14 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:48.863 10:28:14 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 
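Annotation (not part of the captured output): the "integer expression expected" error recorded above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']' — an empty expansion handed to an arithmetic test. A hedged one-line sketch of the usual guard; the variable name is hypothetical, since the trace only shows the empty expansion:

    # hypothetical flag name; ":-0" substitutes 0 when the variable is unset or empty,
    # so the arithmetic test never sees an empty string
    if [[ "${SPDK_TEST_NVMF_NICS:-0}" -eq 1 ]]; then
        :   # NIC-dependent setup would go here
    fi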
00:04:48.863 10:28:14 -- common/autotest_common.sh@1455 -- # uname 00:04:48.863 10:28:14 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:48.863 10:28:14 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:48.863 10:28:14 -- common/autotest_common.sh@1475 -- # uname 00:04:48.863 10:28:14 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:48.863 10:28:14 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:48.863 10:28:14 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:48.863 lcov: LCOV version 1.15 00:04:48.863 10:28:14 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:06.948 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:06.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:21.844 10:28:47 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:21.844 10:28:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:21.844 10:28:47 -- common/autotest_common.sh@10 -- # set +x 00:05:21.844 10:28:47 -- spdk/autotest.sh@78 -- # rm -f 00:05:21.844 10:28:47 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:22.779 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:22.779 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:22.779 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:22.779 10:28:48 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:22.779 10:28:48 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:22.779 10:28:48 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:22.779 10:28:48 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:22.779 10:28:48 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:22.779 10:28:48 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:22.779 10:28:48 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:22.779 10:28:48 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:22.779 10:28:48 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:22.779 10:28:48 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:22.780 10:28:48 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:22.780 10:28:48 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:22.780 10:28:48 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:22.780 10:28:48 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:22.780 10:28:48 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:22.780 10:28:48 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:22.780 10:28:48 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:22.780 10:28:48 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:22.780 10:28:48 -- common/autotest_common.sh@1651 -- 
# [[ none != none ]] 00:05:22.780 10:28:48 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:22.780 10:28:48 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:22.780 10:28:48 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:22.780 10:28:48 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:22.780 10:28:48 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:22.780 10:28:48 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:22.780 10:28:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:22.780 10:28:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:22.780 10:28:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:22.780 10:28:48 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:22.780 10:28:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:22.780 No valid GPT data, bailing 00:05:22.780 10:28:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:22.780 10:28:48 -- scripts/common.sh@394 -- # pt= 00:05:22.780 10:28:48 -- scripts/common.sh@395 -- # return 1 00:05:22.780 10:28:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:22.780 1+0 records in 00:05:22.780 1+0 records out 00:05:22.780 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00480111 s, 218 MB/s 00:05:22.780 10:28:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:22.780 10:28:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:22.780 10:28:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:22.780 10:28:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:22.780 10:28:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:22.780 No valid GPT data, bailing 00:05:22.780 10:28:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:22.780 10:28:48 -- scripts/common.sh@394 -- # pt= 00:05:22.780 10:28:48 -- scripts/common.sh@395 -- # return 1 00:05:22.780 10:28:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:22.780 1+0 records in 00:05:22.780 1+0 records out 00:05:22.780 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00460548 s, 228 MB/s 00:05:22.780 10:28:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:22.780 10:28:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:22.780 10:28:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:22.780 10:28:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:22.780 10:28:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:22.780 No valid GPT data, bailing 00:05:22.780 10:28:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:22.780 10:28:48 -- scripts/common.sh@394 -- # pt= 00:05:22.780 10:28:48 -- scripts/common.sh@395 -- # return 1 00:05:22.780 10:28:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:23.038 1+0 records in 00:05:23.038 1+0 records out 00:05:23.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050908 s, 206 MB/s 00:05:23.038 10:28:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:23.038 10:28:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:23.038 10:28:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:23.038 10:28:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:23.038 10:28:48 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:23.038 No valid GPT data, bailing 00:05:23.038 10:28:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:23.038 10:28:48 -- scripts/common.sh@394 -- # pt= 00:05:23.038 10:28:48 -- scripts/common.sh@395 -- # return 1 00:05:23.038 10:28:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:23.038 1+0 records in 00:05:23.038 1+0 records out 00:05:23.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00469575 s, 223 MB/s 00:05:23.038 10:28:48 -- spdk/autotest.sh@105 -- # sync 00:05:23.038 10:28:48 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:23.038 10:28:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:23.038 10:28:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:24.938 10:28:50 -- spdk/autotest.sh@111 -- # uname -s 00:05:24.938 10:28:50 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:24.938 10:28:50 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:24.938 10:28:50 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:25.505 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:25.505 Hugepages 00:05:25.505 node hugesize free / total 00:05:25.505 node0 1048576kB 0 / 0 00:05:25.505 node0 2048kB 0 / 0 00:05:25.505 00:05:25.505 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:25.764 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:25.764 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:25.764 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:25.764 10:28:51 -- spdk/autotest.sh@117 -- # uname -s 00:05:25.764 10:28:51 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:25.764 10:28:51 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:25.764 10:28:51 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:26.332 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.589 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:26.589 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:26.589 10:28:52 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:27.962 10:28:53 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:27.962 10:28:53 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:27.962 10:28:53 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:27.962 10:28:53 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:27.962 10:28:53 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:27.962 10:28:53 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:27.962 10:28:53 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:27.962 10:28:53 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:27.962 10:28:53 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:27.962 10:28:53 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:27.962 10:28:53 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:27.962 10:28:53 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:27.962 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
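Annotation (not part of the captured output): the pre-cleanup pass traced above walks every /dev/nvme*n* namespace, asks spdk-gpt.py/blkid whether it carries a partition table, and zeroes the first MiB of any namespace that does not, so later tests start from blank media. A minimal bash sketch of that idiom, simplified to the commands visible in the trace (the real block_in_use helper also consults spdk-gpt.py):

    shopt -s extglob                      # needed for the !(*p*) glob, as in scripts/common.sh
    for dev in /dev/nvme*n!(*p*); do      # namespaces only; skip partitions like nvme0n1p1
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        if [[ -z $pt ]]; then             # no partition table -> treat the device as unused
            dd if=/dev/zero of="$dev" bs=1M count=1   # wipe the first MiB, as logged above
        fi
    done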
00:05:27.962 Waiting for block devices as requested 00:05:27.962 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:28.220 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:28.220 10:28:53 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:28.220 10:28:53 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:28.220 10:28:53 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:28.220 10:28:53 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:28.220 10:28:53 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:28.220 10:28:53 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:28.220 10:28:53 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:28.220 10:28:53 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:28.220 10:28:53 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:28.220 10:28:53 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:28.220 10:28:53 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:28.220 10:28:53 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:28.220 10:28:53 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:28.220 10:28:53 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:28.220 10:28:53 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:28.220 10:28:53 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:28.220 10:28:53 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:28.220 10:28:53 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:28.220 10:28:53 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:28.220 10:28:53 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:28.220 10:28:53 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:28.220 10:28:53 -- common/autotest_common.sh@1541 -- # continue 00:05:28.220 10:28:53 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:28.220 10:28:53 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:28.220 10:28:53 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:28.220 10:28:53 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:28.220 10:28:53 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:28.220 10:28:53 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:28.220 10:28:53 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:28.220 10:28:53 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:28.220 10:28:53 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:28.220 10:28:53 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:28.220 10:28:53 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:28.220 10:28:53 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:28.220 10:28:53 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:28.220 10:28:53 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:28.220 10:28:53 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:28.220 10:28:53 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:28.220 10:28:53 
-- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:28.220 10:28:53 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:28.220 10:28:53 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:28.220 10:28:53 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:28.220 10:28:53 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:28.220 10:28:53 -- common/autotest_common.sh@1541 -- # continue 00:05:28.220 10:28:53 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:28.220 10:28:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:28.220 10:28:53 -- common/autotest_common.sh@10 -- # set +x 00:05:28.478 10:28:53 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:28.478 10:28:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:28.478 10:28:53 -- common/autotest_common.sh@10 -- # set +x 00:05:28.478 10:28:53 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:29.043 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:29.043 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:29.043 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:29.301 10:28:54 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:29.301 10:28:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:29.301 10:28:54 -- common/autotest_common.sh@10 -- # set +x 00:05:29.301 10:28:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:29.301 10:28:54 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:29.301 10:28:54 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:29.301 10:28:54 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:29.301 10:28:54 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:29.301 10:28:54 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:29.301 10:28:54 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:29.301 10:28:54 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:29.301 10:28:54 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:29.301 10:28:54 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:29.301 10:28:54 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:29.301 10:28:54 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:29.301 10:28:54 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:29.301 10:28:54 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:29.301 10:28:54 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:29.301 10:28:54 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:29.301 10:28:54 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:29.301 10:28:54 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:29.301 10:28:54 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:29.301 10:28:54 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:29.301 10:28:54 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:29.301 10:28:54 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:29.301 10:28:54 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:29.301 10:28:54 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:29.301 10:28:54 -- common/autotest_common.sh@1570 -- # return 0 
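Annotation (not part of the captured output): the bdf loop traced above resolves each PCI address to its /dev/nvmeX controller through the sysfs symlinks, then reads the OACS word from 'nvme id-ctrl'; bit 3 (0x8) of OACS advertises namespace management, which is what gates the unvmcap check. A sketch of that probe for one controller, using only commands shown in the trace:

    bdf=0000:00:10.0
    # map the PCI address to its controller node via sysfs, as get_nvme_ctrlr_from_bdf does
    ctrlr=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
    oacs=$(nvme id-ctrl "/dev/$ctrlr" | grep oacs | cut -d: -f2)   # e.g. ' 0x12a' above
    if (( oacs & 0x8 )); then             # bit 3 set: namespace management supported
        nvme id-ctrl "/dev/$ctrlr" | grep unvmcap | cut -d: -f2    # ' 0' in the log -> nothing to revert
    fi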
00:05:29.301 10:28:54 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:29.301 10:28:54 -- common/autotest_common.sh@1578 -- # return 0 00:05:29.301 10:28:54 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:29.301 10:28:54 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:29.301 10:28:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:29.301 10:28:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:29.301 10:28:54 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:29.301 10:28:54 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:29.301 10:28:54 -- common/autotest_common.sh@10 -- # set +x 00:05:29.301 10:28:54 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:05:29.301 10:28:54 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:29.301 10:28:54 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:29.301 10:28:54 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:29.301 10:28:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:29.301 10:28:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:29.301 10:28:54 -- common/autotest_common.sh@10 -- # set +x 00:05:29.301 ************************************ 00:05:29.301 START TEST env 00:05:29.301 ************************************ 00:05:29.301 10:28:54 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:29.301 * Looking for test storage... 00:05:29.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:29.301 10:28:54 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:29.301 10:28:54 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:29.301 10:28:54 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:29.559 10:28:54 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:29.559 10:28:54 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.559 10:28:54 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.559 10:28:54 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.559 10:28:54 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.559 10:28:54 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.559 10:28:54 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.559 10:28:54 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.559 10:28:54 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.559 10:28:54 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.559 10:28:54 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.559 10:28:54 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.559 10:28:54 env -- scripts/common.sh@344 -- # case "$op" in 00:05:29.559 10:28:54 env -- scripts/common.sh@345 -- # : 1 00:05:29.559 10:28:54 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.559 10:28:54 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.560 10:28:54 env -- scripts/common.sh@365 -- # decimal 1 00:05:29.560 10:28:54 env -- scripts/common.sh@353 -- # local d=1 00:05:29.560 10:28:54 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.560 10:28:54 env -- scripts/common.sh@355 -- # echo 1 00:05:29.560 10:28:54 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.560 10:28:54 env -- scripts/common.sh@366 -- # decimal 2 00:05:29.560 10:28:54 env -- scripts/common.sh@353 -- # local d=2 00:05:29.560 10:28:54 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.560 10:28:54 env -- scripts/common.sh@355 -- # echo 2 00:05:29.560 10:28:54 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.560 10:28:54 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.560 10:28:54 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.560 10:28:54 env -- scripts/common.sh@368 -- # return 0 00:05:29.560 10:28:54 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.560 10:28:54 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:29.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.560 --rc genhtml_branch_coverage=1 00:05:29.560 --rc genhtml_function_coverage=1 00:05:29.560 --rc genhtml_legend=1 00:05:29.560 --rc geninfo_all_blocks=1 00:05:29.560 --rc geninfo_unexecuted_blocks=1 00:05:29.560 00:05:29.560 ' 00:05:29.560 10:28:54 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:29.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.560 --rc genhtml_branch_coverage=1 00:05:29.560 --rc genhtml_function_coverage=1 00:05:29.560 --rc genhtml_legend=1 00:05:29.560 --rc geninfo_all_blocks=1 00:05:29.560 --rc geninfo_unexecuted_blocks=1 00:05:29.560 00:05:29.560 ' 00:05:29.560 10:28:54 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:29.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.560 --rc genhtml_branch_coverage=1 00:05:29.560 --rc genhtml_function_coverage=1 00:05:29.560 --rc genhtml_legend=1 00:05:29.560 --rc geninfo_all_blocks=1 00:05:29.560 --rc geninfo_unexecuted_blocks=1 00:05:29.560 00:05:29.560 ' 00:05:29.560 10:28:54 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:29.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.560 --rc genhtml_branch_coverage=1 00:05:29.560 --rc genhtml_function_coverage=1 00:05:29.560 --rc genhtml_legend=1 00:05:29.560 --rc geninfo_all_blocks=1 00:05:29.560 --rc geninfo_unexecuted_blocks=1 00:05:29.560 00:05:29.560 ' 00:05:29.560 10:28:54 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:29.560 10:28:54 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:29.560 10:28:54 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:29.560 10:28:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.560 ************************************ 00:05:29.560 START TEST env_memory 00:05:29.560 ************************************ 00:05:29.560 10:28:54 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:29.560 00:05:29.560 00:05:29.560 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.560 http://cunit.sourceforge.net/ 00:05:29.560 00:05:29.560 00:05:29.560 Suite: memory 00:05:29.560 Test: alloc and free memory map ...[2024-11-15 10:28:54.945033] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:29.560 passed 00:05:29.560 Test: mem map translation ...[2024-11-15 10:28:54.969756] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:29.560 [2024-11-15 10:28:54.969792] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:29.560 [2024-11-15 10:28:54.969848] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:29.560 [2024-11-15 10:28:54.969874] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:29.560 passed 00:05:29.560 Test: mem map registration ...[2024-11-15 10:28:55.019712] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:29.560 [2024-11-15 10:28:55.019775] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:29.560 passed 00:05:29.819 Test: mem map adjacent registrations ...passed 00:05:29.819 00:05:29.819 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.819 suites 1 1 n/a 0 0 00:05:29.819 tests 4 4 4 0 0 00:05:29.819 asserts 152 152 152 0 n/a 00:05:29.819 00:05:29.819 Elapsed time = 0.175 seconds 00:05:29.819 00:05:29.819 real 0m0.192s 00:05:29.819 user 0m0.176s 00:05:29.819 sys 0m0.013s 00:05:29.819 10:28:55 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:29.819 10:28:55 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:29.819 ************************************ 00:05:29.819 END TEST env_memory 00:05:29.819 ************************************ 00:05:29.819 10:28:55 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:29.819 10:28:55 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:29.819 10:28:55 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:29.819 10:28:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.819 ************************************ 00:05:29.819 START TEST env_vtophys 00:05:29.819 ************************************ 00:05:29.819 10:28:55 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:29.819 EAL: lib.eal log level changed from notice to debug 00:05:29.819 EAL: Detected lcore 0 as core 0 on socket 0 00:05:29.819 EAL: Detected lcore 1 as core 0 on socket 0 00:05:29.819 EAL: Detected lcore 2 as core 0 on socket 0 00:05:29.819 EAL: Detected lcore 3 as core 0 on socket 0 00:05:29.819 EAL: Detected lcore 4 as core 0 on socket 0 00:05:29.819 EAL: Detected lcore 5 as core 0 on socket 0 00:05:29.819 EAL: Detected lcore 6 as core 0 on socket 0 00:05:29.819 EAL: Detected lcore 7 as core 0 on socket 0 00:05:29.819 EAL: Detected lcore 8 as core 0 on socket 0 00:05:29.819 EAL: Detected lcore 9 as core 0 on socket 0 00:05:29.819 EAL: Maximum logical cores by configuration: 128 00:05:29.819 EAL: Detected CPU lcores: 10 00:05:29.819 EAL: Detected NUMA nodes: 1 00:05:29.819 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:29.819 EAL: Detected shared linkage of DPDK 00:05:29.819 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:29.819 EAL: Selected IOVA mode 'PA' 00:05:29.819 EAL: Probing VFIO support... 00:05:29.819 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:29.819 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:29.819 EAL: Ask a virtual area of 0x2e000 bytes 00:05:29.819 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:29.819 EAL: Setting up physically contiguous memory... 00:05:29.819 EAL: Setting maximum number of open files to 524288 00:05:29.819 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:29.819 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:29.819 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.819 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:29.819 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.819 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.819 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:29.819 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:29.819 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.819 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:29.819 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.819 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.819 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:29.819 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:29.819 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.819 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:29.819 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.819 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.819 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:29.819 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:29.819 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.819 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:29.819 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.819 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.819 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:29.819 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:29.819 EAL: Hugepages will be freed exactly as allocated. 00:05:29.819 EAL: No shared files mode enabled, IPC is disabled 00:05:29.819 EAL: No shared files mode enabled, IPC is disabled 00:05:29.819 EAL: TSC frequency is ~2200000 KHz 00:05:29.819 EAL: Main lcore 0 is ready (tid=7f4d57916a00;cpuset=[0]) 00:05:29.819 EAL: Trying to obtain current memory policy. 00:05:29.819 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.819 EAL: Restoring previous memory policy: 0 00:05:29.819 EAL: request: mp_malloc_sync 00:05:29.819 EAL: No shared files mode enabled, IPC is disabled 00:05:29.819 EAL: Heap on socket 0 was expanded by 2MB 00:05:29.819 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:29.819 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:29.819 EAL: Mem event callback 'spdk:(nil)' registered 00:05:29.819 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:30.078 00:05:30.078 00:05:30.078 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.078 http://cunit.sourceforge.net/ 00:05:30.078 00:05:30.078 00:05:30.078 Suite: components_suite 00:05:30.078 Test: vtophys_malloc_test ...passed 00:05:30.078 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:30.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.078 EAL: Restoring previous memory policy: 4 00:05:30.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.078 EAL: request: mp_malloc_sync 00:05:30.078 EAL: No shared files mode enabled, IPC is disabled 00:05:30.078 EAL: Heap on socket 0 was expanded by 4MB 00:05:30.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.078 EAL: request: mp_malloc_sync 00:05:30.078 EAL: No shared files mode enabled, IPC is disabled 00:05:30.078 EAL: Heap on socket 0 was shrunk by 4MB 00:05:30.078 EAL: Trying to obtain current memory policy. 00:05:30.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.078 EAL: Restoring previous memory policy: 4 00:05:30.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.078 EAL: request: mp_malloc_sync 00:05:30.078 EAL: No shared files mode enabled, IPC is disabled 00:05:30.078 EAL: Heap on socket 0 was expanded by 6MB 00:05:30.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.078 EAL: request: mp_malloc_sync 00:05:30.078 EAL: No shared files mode enabled, IPC is disabled 00:05:30.078 EAL: Heap on socket 0 was shrunk by 6MB 00:05:30.078 EAL: Trying to obtain current memory policy. 00:05:30.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.078 EAL: Restoring previous memory policy: 4 00:05:30.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.078 EAL: request: mp_malloc_sync 00:05:30.078 EAL: No shared files mode enabled, IPC is disabled 00:05:30.078 EAL: Heap on socket 0 was expanded by 10MB 00:05:30.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.078 EAL: request: mp_malloc_sync 00:05:30.078 EAL: No shared files mode enabled, IPC is disabled 00:05:30.078 EAL: Heap on socket 0 was shrunk by 10MB 00:05:30.078 EAL: Trying to obtain current memory policy. 00:05:30.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.078 EAL: Restoring previous memory policy: 4 00:05:30.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.078 EAL: request: mp_malloc_sync 00:05:30.078 EAL: No shared files mode enabled, IPC is disabled 00:05:30.078 EAL: Heap on socket 0 was expanded by 18MB 00:05:30.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.078 EAL: request: mp_malloc_sync 00:05:30.078 EAL: No shared files mode enabled, IPC is disabled 00:05:30.078 EAL: Heap on socket 0 was shrunk by 18MB 00:05:30.078 EAL: Trying to obtain current memory policy. 00:05:30.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.078 EAL: Restoring previous memory policy: 4 00:05:30.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.078 EAL: request: mp_malloc_sync 00:05:30.078 EAL: No shared files mode enabled, IPC is disabled 00:05:30.078 EAL: Heap on socket 0 was expanded by 34MB 00:05:30.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.078 EAL: request: mp_malloc_sync 00:05:30.078 EAL: No shared files mode enabled, IPC is disabled 00:05:30.078 EAL: Heap on socket 0 was shrunk by 34MB 00:05:30.078 EAL: Trying to obtain current memory policy. 
00:05:30.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.078 EAL: Restoring previous memory policy: 4 00:05:30.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.078 EAL: request: mp_malloc_sync 00:05:30.078 EAL: No shared files mode enabled, IPC is disabled 00:05:30.078 EAL: Heap on socket 0 was expanded by 66MB 00:05:30.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.078 EAL: request: mp_malloc_sync 00:05:30.078 EAL: No shared files mode enabled, IPC is disabled 00:05:30.078 EAL: Heap on socket 0 was shrunk by 66MB 00:05:30.078 EAL: Trying to obtain current memory policy. 00:05:30.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.078 EAL: Restoring previous memory policy: 4 00:05:30.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.078 EAL: request: mp_malloc_sync 00:05:30.078 EAL: No shared files mode enabled, IPC is disabled 00:05:30.078 EAL: Heap on socket 0 was expanded by 130MB 00:05:30.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.078 EAL: request: mp_malloc_sync 00:05:30.078 EAL: No shared files mode enabled, IPC is disabled 00:05:30.078 EAL: Heap on socket 0 was shrunk by 130MB 00:05:30.078 EAL: Trying to obtain current memory policy. 00:05:30.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.078 EAL: Restoring previous memory policy: 4 00:05:30.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.078 EAL: request: mp_malloc_sync 00:05:30.078 EAL: No shared files mode enabled, IPC is disabled 00:05:30.078 EAL: Heap on socket 0 was expanded by 258MB 00:05:30.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.337 EAL: request: mp_malloc_sync 00:05:30.337 EAL: No shared files mode enabled, IPC is disabled 00:05:30.337 EAL: Heap on socket 0 was shrunk by 258MB 00:05:30.337 EAL: Trying to obtain current memory policy. 00:05:30.337 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.337 EAL: Restoring previous memory policy: 4 00:05:30.337 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.337 EAL: request: mp_malloc_sync 00:05:30.337 EAL: No shared files mode enabled, IPC is disabled 00:05:30.337 EAL: Heap on socket 0 was expanded by 514MB 00:05:30.595 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.595 EAL: request: mp_malloc_sync 00:05:30.595 EAL: No shared files mode enabled, IPC is disabled 00:05:30.595 EAL: Heap on socket 0 was shrunk by 514MB 00:05:30.595 EAL: Trying to obtain current memory policy. 
00:05:30.595 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.855 EAL: Restoring previous memory policy: 4 00:05:30.855 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.855 EAL: request: mp_malloc_sync 00:05:30.855 EAL: No shared files mode enabled, IPC is disabled 00:05:30.855 EAL: Heap on socket 0 was expanded by 1026MB 00:05:31.113 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.372 EAL: request: mp_malloc_sync 00:05:31.372 EAL: No shared files mode enabled, IPC is disabled 00:05:31.372 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:31.372 passed 00:05:31.372 00:05:31.372 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.372 suites 1 1 n/a 0 0 00:05:31.372 tests 2 2 2 0 0 00:05:31.372 asserts 5379 5379 5379 0 n/a 00:05:31.372 00:05:31.372 Elapsed time = 1.264 seconds 00:05:31.372 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.372 EAL: request: mp_malloc_sync 00:05:31.372 EAL: No shared files mode enabled, IPC is disabled 00:05:31.372 EAL: Heap on socket 0 was shrunk by 2MB 00:05:31.372 EAL: No shared files mode enabled, IPC is disabled 00:05:31.372 EAL: No shared files mode enabled, IPC is disabled 00:05:31.373 EAL: No shared files mode enabled, IPC is disabled 00:05:31.373 00:05:31.373 real 0m1.473s 00:05:31.373 user 0m0.814s 00:05:31.373 sys 0m0.529s 00:05:31.373 10:28:56 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:31.373 10:28:56 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:31.373 ************************************ 00:05:31.373 END TEST env_vtophys 00:05:31.373 ************************************ 00:05:31.373 10:28:56 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:31.373 10:28:56 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:31.373 10:28:56 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:31.373 10:28:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.373 ************************************ 00:05:31.373 START TEST env_pci 00:05:31.373 ************************************ 00:05:31.373 10:28:56 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:31.373 00:05:31.373 00:05:31.373 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.373 http://cunit.sourceforge.net/ 00:05:31.373 00:05:31.373 00:05:31.373 Suite: pci 00:05:31.373 Test: pci_hook ...[2024-11-15 10:28:56.691039] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56657 has claimed it 00:05:31.373 passed 00:05:31.373 00:05:31.373 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.373 suites 1 1 n/a 0 0 00:05:31.373 tests 1 1 1 0 0 00:05:31.373 asserts 25 25 25 0 n/a 00:05:31.373 00:05:31.373 Elapsed time = 0.002 seconds 00:05:31.373 EAL: Cannot find device (10000:00:01.0) 00:05:31.373 EAL: Failed to attach device on primary process 00:05:31.373 00:05:31.373 real 0m0.022s 00:05:31.373 user 0m0.011s 00:05:31.373 sys 0m0.010s 00:05:31.373 10:28:56 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:31.373 10:28:56 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:31.373 ************************************ 00:05:31.373 END TEST env_pci 00:05:31.373 ************************************ 00:05:31.373 10:28:56 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:31.373 10:28:56 env -- env/env.sh@15 -- # uname 00:05:31.373 10:28:56 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:31.373 10:28:56 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:31.373 10:28:56 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:31.373 10:28:56 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:31.373 10:28:56 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:31.373 10:28:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.373 ************************************ 00:05:31.373 START TEST env_dpdk_post_init 00:05:31.373 ************************************ 00:05:31.373 10:28:56 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:31.373 EAL: Detected CPU lcores: 10 00:05:31.373 EAL: Detected NUMA nodes: 1 00:05:31.373 EAL: Detected shared linkage of DPDK 00:05:31.373 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:31.373 EAL: Selected IOVA mode 'PA' 00:05:31.632 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:31.632 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:31.632 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:31.632 Starting DPDK initialization... 00:05:31.632 Starting SPDK post initialization... 00:05:31.632 SPDK NVMe probe 00:05:31.632 Attaching to 0000:00:10.0 00:05:31.632 Attaching to 0000:00:11.0 00:05:31.632 Attached to 0000:00:10.0 00:05:31.632 Attached to 0000:00:11.0 00:05:31.632 Cleaning up... 00:05:31.632 00:05:31.632 real 0m0.192s 00:05:31.632 user 0m0.049s 00:05:31.632 sys 0m0.042s 00:05:31.632 10:28:56 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:31.632 10:28:56 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.632 ************************************ 00:05:31.632 END TEST env_dpdk_post_init 00:05:31.632 ************************************ 00:05:31.632 10:28:56 env -- env/env.sh@26 -- # uname 00:05:31.632 10:28:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:31.632 10:28:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:31.632 10:28:56 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:31.632 10:28:56 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:31.632 10:28:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.632 ************************************ 00:05:31.632 START TEST env_mem_callbacks 00:05:31.632 ************************************ 00:05:31.632 10:28:56 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:31.632 EAL: Detected CPU lcores: 10 00:05:31.632 EAL: Detected NUMA nodes: 1 00:05:31.632 EAL: Detected shared linkage of DPDK 00:05:31.632 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:31.632 EAL: Selected IOVA mode 'PA' 00:05:31.890 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:31.891 00:05:31.891 00:05:31.891 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.891 http://cunit.sourceforge.net/ 00:05:31.891 00:05:31.891 00:05:31.891 Suite: memory 00:05:31.891 Test: test ... 
00:05:31.891 register 0x200000200000 2097152 00:05:31.891 malloc 3145728 00:05:31.891 register 0x200000400000 4194304 00:05:31.891 buf 0x200000500000 len 3145728 PASSED 00:05:31.891 malloc 64 00:05:31.891 buf 0x2000004fff40 len 64 PASSED 00:05:31.891 malloc 4194304 00:05:31.891 register 0x200000800000 6291456 00:05:31.891 buf 0x200000a00000 len 4194304 PASSED 00:05:31.891 free 0x200000500000 3145728 00:05:31.891 free 0x2000004fff40 64 00:05:31.891 unregister 0x200000400000 4194304 PASSED 00:05:31.891 free 0x200000a00000 4194304 00:05:31.891 unregister 0x200000800000 6291456 PASSED 00:05:31.891 malloc 8388608 00:05:31.891 register 0x200000400000 10485760 00:05:31.891 buf 0x200000600000 len 8388608 PASSED 00:05:31.891 free 0x200000600000 8388608 00:05:31.891 unregister 0x200000400000 10485760 PASSED 00:05:31.891 passed 00:05:31.891 00:05:31.891 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.891 suites 1 1 n/a 0 0 00:05:31.891 tests 1 1 1 0 0 00:05:31.891 asserts 15 15 15 0 n/a 00:05:31.891 00:05:31.891 Elapsed time = 0.005 seconds 00:05:31.891 00:05:31.891 real 0m0.137s 00:05:31.891 user 0m0.014s 00:05:31.891 sys 0m0.023s 00:05:31.891 10:28:57 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:31.891 10:28:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:31.891 ************************************ 00:05:31.891 END TEST env_mem_callbacks 00:05:31.891 ************************************ 00:05:31.891 00:05:31.891 real 0m2.478s 00:05:31.891 user 0m1.261s 00:05:31.891 sys 0m0.866s 00:05:31.891 10:28:57 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:31.891 10:28:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.891 ************************************ 00:05:31.891 END TEST env 00:05:31.891 ************************************ 00:05:31.891 10:28:57 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:31.891 10:28:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:31.891 10:28:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:31.891 10:28:57 -- common/autotest_common.sh@10 -- # set +x 00:05:31.891 ************************************ 00:05:31.891 START TEST rpc 00:05:31.891 ************************************ 00:05:31.891 10:28:57 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:31.891 * Looking for test storage... 
00:05:31.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:31.891 10:28:57 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:31.891 10:28:57 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:31.891 10:28:57 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:32.151 10:28:57 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:32.151 10:28:57 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.151 10:28:57 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.151 10:28:57 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.151 10:28:57 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.151 10:28:57 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.151 10:28:57 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.151 10:28:57 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.151 10:28:57 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.151 10:28:57 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.151 10:28:57 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.151 10:28:57 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.151 10:28:57 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:32.151 10:28:57 rpc -- scripts/common.sh@345 -- # : 1 00:05:32.151 10:28:57 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.151 10:28:57 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.151 10:28:57 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:32.151 10:28:57 rpc -- scripts/common.sh@353 -- # local d=1 00:05:32.151 10:28:57 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.151 10:28:57 rpc -- scripts/common.sh@355 -- # echo 1 00:05:32.151 10:28:57 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.151 10:28:57 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:32.151 10:28:57 rpc -- scripts/common.sh@353 -- # local d=2 00:05:32.151 10:28:57 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.151 10:28:57 rpc -- scripts/common.sh@355 -- # echo 2 00:05:32.151 10:28:57 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.151 10:28:57 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.151 10:28:57 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.151 10:28:57 rpc -- scripts/common.sh@368 -- # return 0 00:05:32.151 10:28:57 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.151 10:28:57 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:32.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.151 --rc genhtml_branch_coverage=1 00:05:32.151 --rc genhtml_function_coverage=1 00:05:32.151 --rc genhtml_legend=1 00:05:32.151 --rc geninfo_all_blocks=1 00:05:32.151 --rc geninfo_unexecuted_blocks=1 00:05:32.151 00:05:32.151 ' 00:05:32.151 10:28:57 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:32.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.151 --rc genhtml_branch_coverage=1 00:05:32.151 --rc genhtml_function_coverage=1 00:05:32.151 --rc genhtml_legend=1 00:05:32.151 --rc geninfo_all_blocks=1 00:05:32.151 --rc geninfo_unexecuted_blocks=1 00:05:32.151 00:05:32.151 ' 00:05:32.151 10:28:57 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:32.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.151 --rc genhtml_branch_coverage=1 00:05:32.151 --rc genhtml_function_coverage=1 00:05:32.151 --rc 
genhtml_legend=1 00:05:32.151 --rc geninfo_all_blocks=1 00:05:32.151 --rc geninfo_unexecuted_blocks=1 00:05:32.151 00:05:32.151 ' 00:05:32.151 10:28:57 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:32.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.151 --rc genhtml_branch_coverage=1 00:05:32.151 --rc genhtml_function_coverage=1 00:05:32.151 --rc genhtml_legend=1 00:05:32.151 --rc geninfo_all_blocks=1 00:05:32.151 --rc geninfo_unexecuted_blocks=1 00:05:32.151 00:05:32.151 ' 00:05:32.151 10:28:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56775 00:05:32.151 10:28:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.151 10:28:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56775 00:05:32.151 10:28:57 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:32.151 10:28:57 rpc -- common/autotest_common.sh@833 -- # '[' -z 56775 ']' 00:05:32.151 10:28:57 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.151 10:28:57 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:32.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.151 10:28:57 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.151 10:28:57 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:32.151 10:28:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.151 [2024-11-15 10:28:57.481124] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:05:32.151 [2024-11-15 10:28:57.481249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56775 ] 00:05:32.151 [2024-11-15 10:28:57.626708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.410 [2024-11-15 10:28:57.727235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:32.410 [2024-11-15 10:28:57.727356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56775' to capture a snapshot of events at runtime. 00:05:32.410 [2024-11-15 10:28:57.727371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:32.410 [2024-11-15 10:28:57.727383] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:32.410 [2024-11-15 10:28:57.727393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56775 for offline analysis/debug. 
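# The NOTICE lines above spell out how to inspect the bdev tracepoints this
# spdk_tgt was started with (-e bdev). While pid 56775 is alive, a snapshot
# can be captured exactly as suggested:
spdk_trace -s spdk_tgt -p 56775
# or the shared-memory trace file it names can be copied for offline analysis:
cp /dev/shm/spdk_tgt_trace.pid56775 /tmp/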
00:05:32.410 [2024-11-15 10:28:57.728127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.410 [2024-11-15 10:28:57.842656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:32.743 10:28:58 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:32.743 10:28:58 rpc -- common/autotest_common.sh@866 -- # return 0 00:05:32.743 10:28:58 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:32.743 10:28:58 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:32.743 10:28:58 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:32.743 10:28:58 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:32.743 10:28:58 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:32.743 10:28:58 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:32.743 10:28:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.743 ************************************ 00:05:32.743 START TEST rpc_integrity 00:05:32.743 ************************************ 00:05:32.743 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:32.743 10:28:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:32.743 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.743 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.743 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.743 10:28:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:32.743 10:28:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:32.743 10:28:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:32.743 10:28:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:32.743 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.743 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.743 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.743 10:28:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:32.743 10:28:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:32.743 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.743 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.743 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.743 10:28:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:32.743 { 00:05:32.743 "name": "Malloc0", 00:05:32.743 "aliases": [ 00:05:32.743 "a227dced-47ad-4ff3-9159-ddae43355ca4" 00:05:32.743 ], 00:05:32.743 "product_name": "Malloc disk", 00:05:32.743 "block_size": 512, 00:05:32.743 "num_blocks": 16384, 00:05:32.743 "uuid": "a227dced-47ad-4ff3-9159-ddae43355ca4", 00:05:32.743 "assigned_rate_limits": { 00:05:32.743 "rw_ios_per_sec": 0, 00:05:32.743 "rw_mbytes_per_sec": 0, 00:05:32.743 "r_mbytes_per_sec": 0, 00:05:32.743 "w_mbytes_per_sec": 0 00:05:32.743 }, 00:05:32.743 "claimed": false, 00:05:32.743 "zoned": false, 00:05:32.743 
"supported_io_types": { 00:05:32.743 "read": true, 00:05:32.743 "write": true, 00:05:32.743 "unmap": true, 00:05:32.743 "flush": true, 00:05:32.743 "reset": true, 00:05:32.743 "nvme_admin": false, 00:05:32.743 "nvme_io": false, 00:05:32.743 "nvme_io_md": false, 00:05:32.743 "write_zeroes": true, 00:05:32.743 "zcopy": true, 00:05:32.743 "get_zone_info": false, 00:05:32.743 "zone_management": false, 00:05:32.743 "zone_append": false, 00:05:32.743 "compare": false, 00:05:32.743 "compare_and_write": false, 00:05:32.743 "abort": true, 00:05:32.743 "seek_hole": false, 00:05:32.743 "seek_data": false, 00:05:32.743 "copy": true, 00:05:32.743 "nvme_iov_md": false 00:05:32.743 }, 00:05:32.743 "memory_domains": [ 00:05:32.743 { 00:05:32.743 "dma_device_id": "system", 00:05:32.743 "dma_device_type": 1 00:05:32.743 }, 00:05:32.743 { 00:05:32.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.743 "dma_device_type": 2 00:05:32.743 } 00:05:32.743 ], 00:05:32.743 "driver_specific": {} 00:05:32.743 } 00:05:32.743 ]' 00:05:32.743 10:28:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:33.002 10:28:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:33.002 10:28:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:33.002 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.002 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.002 [2024-11-15 10:28:58.255781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:33.002 [2024-11-15 10:28:58.255858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:33.002 [2024-11-15 10:28:58.255883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x154d050 00:05:33.002 [2024-11-15 10:28:58.255894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:33.002 [2024-11-15 10:28:58.257707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:33.002 [2024-11-15 10:28:58.257748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:33.002 Passthru0 00:05:33.002 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.002 10:28:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:33.002 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.002 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.002 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.002 10:28:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:33.002 { 00:05:33.002 "name": "Malloc0", 00:05:33.002 "aliases": [ 00:05:33.002 "a227dced-47ad-4ff3-9159-ddae43355ca4" 00:05:33.002 ], 00:05:33.002 "product_name": "Malloc disk", 00:05:33.002 "block_size": 512, 00:05:33.002 "num_blocks": 16384, 00:05:33.002 "uuid": "a227dced-47ad-4ff3-9159-ddae43355ca4", 00:05:33.002 "assigned_rate_limits": { 00:05:33.002 "rw_ios_per_sec": 0, 00:05:33.002 "rw_mbytes_per_sec": 0, 00:05:33.002 "r_mbytes_per_sec": 0, 00:05:33.002 "w_mbytes_per_sec": 0 00:05:33.002 }, 00:05:33.002 "claimed": true, 00:05:33.002 "claim_type": "exclusive_write", 00:05:33.002 "zoned": false, 00:05:33.002 "supported_io_types": { 00:05:33.002 "read": true, 00:05:33.002 "write": true, 00:05:33.002 "unmap": true, 00:05:33.002 "flush": true, 00:05:33.002 "reset": true, 00:05:33.002 "nvme_admin": false, 
00:05:33.002 "nvme_io": false, 00:05:33.002 "nvme_io_md": false, 00:05:33.002 "write_zeroes": true, 00:05:33.002 "zcopy": true, 00:05:33.002 "get_zone_info": false, 00:05:33.002 "zone_management": false, 00:05:33.002 "zone_append": false, 00:05:33.002 "compare": false, 00:05:33.002 "compare_and_write": false, 00:05:33.002 "abort": true, 00:05:33.002 "seek_hole": false, 00:05:33.002 "seek_data": false, 00:05:33.002 "copy": true, 00:05:33.002 "nvme_iov_md": false 00:05:33.002 }, 00:05:33.002 "memory_domains": [ 00:05:33.002 { 00:05:33.002 "dma_device_id": "system", 00:05:33.002 "dma_device_type": 1 00:05:33.002 }, 00:05:33.002 { 00:05:33.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.002 "dma_device_type": 2 00:05:33.002 } 00:05:33.002 ], 00:05:33.002 "driver_specific": {} 00:05:33.002 }, 00:05:33.002 { 00:05:33.002 "name": "Passthru0", 00:05:33.002 "aliases": [ 00:05:33.002 "4812cf1c-0908-59d6-a0b4-7ebbb92fc1e1" 00:05:33.002 ], 00:05:33.002 "product_name": "passthru", 00:05:33.002 "block_size": 512, 00:05:33.002 "num_blocks": 16384, 00:05:33.002 "uuid": "4812cf1c-0908-59d6-a0b4-7ebbb92fc1e1", 00:05:33.002 "assigned_rate_limits": { 00:05:33.002 "rw_ios_per_sec": 0, 00:05:33.002 "rw_mbytes_per_sec": 0, 00:05:33.002 "r_mbytes_per_sec": 0, 00:05:33.002 "w_mbytes_per_sec": 0 00:05:33.002 }, 00:05:33.002 "claimed": false, 00:05:33.002 "zoned": false, 00:05:33.002 "supported_io_types": { 00:05:33.002 "read": true, 00:05:33.002 "write": true, 00:05:33.002 "unmap": true, 00:05:33.002 "flush": true, 00:05:33.002 "reset": true, 00:05:33.002 "nvme_admin": false, 00:05:33.002 "nvme_io": false, 00:05:33.002 "nvme_io_md": false, 00:05:33.002 "write_zeroes": true, 00:05:33.002 "zcopy": true, 00:05:33.002 "get_zone_info": false, 00:05:33.002 "zone_management": false, 00:05:33.002 "zone_append": false, 00:05:33.002 "compare": false, 00:05:33.002 "compare_and_write": false, 00:05:33.002 "abort": true, 00:05:33.002 "seek_hole": false, 00:05:33.002 "seek_data": false, 00:05:33.002 "copy": true, 00:05:33.002 "nvme_iov_md": false 00:05:33.002 }, 00:05:33.002 "memory_domains": [ 00:05:33.002 { 00:05:33.002 "dma_device_id": "system", 00:05:33.002 "dma_device_type": 1 00:05:33.002 }, 00:05:33.002 { 00:05:33.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.002 "dma_device_type": 2 00:05:33.002 } 00:05:33.002 ], 00:05:33.002 "driver_specific": { 00:05:33.002 "passthru": { 00:05:33.002 "name": "Passthru0", 00:05:33.002 "base_bdev_name": "Malloc0" 00:05:33.002 } 00:05:33.002 } 00:05:33.002 } 00:05:33.002 ]' 00:05:33.002 10:28:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:33.002 10:28:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:33.002 10:28:58 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:33.002 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.002 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.002 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.002 10:28:58 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:33.002 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.002 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.002 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.002 10:28:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:33.002 10:28:58 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.002 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.002 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.002 10:28:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:33.002 10:28:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:33.002 10:28:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:33.002 00:05:33.002 real 0m0.370s 00:05:33.002 user 0m0.269s 00:05:33.002 sys 0m0.035s 00:05:33.002 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:33.002 ************************************ 00:05:33.002 END TEST rpc_integrity 00:05:33.002 10:28:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.002 ************************************ 00:05:33.002 10:28:58 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:33.002 10:28:58 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:33.002 10:28:58 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:33.002 10:28:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.261 ************************************ 00:05:33.261 START TEST rpc_plugins 00:05:33.261 ************************************ 00:05:33.261 10:28:58 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:05:33.261 10:28:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:33.261 10:28:58 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.261 10:28:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.261 10:28:58 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.261 10:28:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:33.261 10:28:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:33.261 10:28:58 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.261 10:28:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.261 10:28:58 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.261 10:28:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:33.261 { 00:05:33.261 "name": "Malloc1", 00:05:33.261 "aliases": [ 00:05:33.261 "43c640d7-3b08-4e2b-a7c2-887cd2abec6a" 00:05:33.261 ], 00:05:33.261 "product_name": "Malloc disk", 00:05:33.261 "block_size": 4096, 00:05:33.261 "num_blocks": 256, 00:05:33.261 "uuid": "43c640d7-3b08-4e2b-a7c2-887cd2abec6a", 00:05:33.261 "assigned_rate_limits": { 00:05:33.261 "rw_ios_per_sec": 0, 00:05:33.261 "rw_mbytes_per_sec": 0, 00:05:33.261 "r_mbytes_per_sec": 0, 00:05:33.261 "w_mbytes_per_sec": 0 00:05:33.261 }, 00:05:33.261 "claimed": false, 00:05:33.261 "zoned": false, 00:05:33.261 "supported_io_types": { 00:05:33.261 "read": true, 00:05:33.261 "write": true, 00:05:33.261 "unmap": true, 00:05:33.261 "flush": true, 00:05:33.261 "reset": true, 00:05:33.261 "nvme_admin": false, 00:05:33.261 "nvme_io": false, 00:05:33.261 "nvme_io_md": false, 00:05:33.261 "write_zeroes": true, 00:05:33.261 "zcopy": true, 00:05:33.261 "get_zone_info": false, 00:05:33.261 "zone_management": false, 00:05:33.261 "zone_append": false, 00:05:33.261 "compare": false, 00:05:33.261 "compare_and_write": false, 00:05:33.261 "abort": true, 00:05:33.261 "seek_hole": false, 00:05:33.261 "seek_data": false, 00:05:33.261 "copy": true, 00:05:33.261 "nvme_iov_md": false 00:05:33.261 }, 00:05:33.261 "memory_domains": [ 00:05:33.261 { 
00:05:33.261 "dma_device_id": "system", 00:05:33.261 "dma_device_type": 1 00:05:33.261 }, 00:05:33.261 { 00:05:33.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.261 "dma_device_type": 2 00:05:33.261 } 00:05:33.261 ], 00:05:33.261 "driver_specific": {} 00:05:33.261 } 00:05:33.261 ]' 00:05:33.261 10:28:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:33.261 10:28:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:33.261 10:28:58 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:33.261 10:28:58 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.261 10:28:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.261 10:28:58 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.261 10:28:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:33.262 10:28:58 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.262 10:28:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.262 10:28:58 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.262 10:28:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:33.262 10:28:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:33.262 10:28:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:33.262 00:05:33.262 real 0m0.167s 00:05:33.262 user 0m0.113s 00:05:33.262 sys 0m0.015s 00:05:33.262 10:28:58 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:33.262 10:28:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.262 ************************************ 00:05:33.262 END TEST rpc_plugins 00:05:33.262 ************************************ 00:05:33.262 10:28:58 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:33.262 10:28:58 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:33.262 10:28:58 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:33.262 10:28:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.262 ************************************ 00:05:33.262 START TEST rpc_trace_cmd_test 00:05:33.262 ************************************ 00:05:33.262 10:28:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:05:33.262 10:28:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:33.262 10:28:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:33.262 10:28:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.262 10:28:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.262 10:28:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.262 10:28:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:33.262 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56775", 00:05:33.262 "tpoint_group_mask": "0x8", 00:05:33.262 "iscsi_conn": { 00:05:33.262 "mask": "0x2", 00:05:33.262 "tpoint_mask": "0x0" 00:05:33.262 }, 00:05:33.262 "scsi": { 00:05:33.262 "mask": "0x4", 00:05:33.262 "tpoint_mask": "0x0" 00:05:33.262 }, 00:05:33.262 "bdev": { 00:05:33.262 "mask": "0x8", 00:05:33.262 "tpoint_mask": "0xffffffffffffffff" 00:05:33.262 }, 00:05:33.262 "nvmf_rdma": { 00:05:33.262 "mask": "0x10", 00:05:33.262 "tpoint_mask": "0x0" 00:05:33.262 }, 00:05:33.262 "nvmf_tcp": { 00:05:33.262 "mask": "0x20", 00:05:33.262 "tpoint_mask": "0x0" 00:05:33.262 }, 00:05:33.262 "ftl": { 00:05:33.262 
"mask": "0x40", 00:05:33.262 "tpoint_mask": "0x0" 00:05:33.262 }, 00:05:33.262 "blobfs": { 00:05:33.262 "mask": "0x80", 00:05:33.262 "tpoint_mask": "0x0" 00:05:33.262 }, 00:05:33.262 "dsa": { 00:05:33.262 "mask": "0x200", 00:05:33.262 "tpoint_mask": "0x0" 00:05:33.262 }, 00:05:33.262 "thread": { 00:05:33.262 "mask": "0x400", 00:05:33.262 "tpoint_mask": "0x0" 00:05:33.262 }, 00:05:33.262 "nvme_pcie": { 00:05:33.262 "mask": "0x800", 00:05:33.262 "tpoint_mask": "0x0" 00:05:33.262 }, 00:05:33.262 "iaa": { 00:05:33.262 "mask": "0x1000", 00:05:33.262 "tpoint_mask": "0x0" 00:05:33.262 }, 00:05:33.262 "nvme_tcp": { 00:05:33.262 "mask": "0x2000", 00:05:33.262 "tpoint_mask": "0x0" 00:05:33.262 }, 00:05:33.262 "bdev_nvme": { 00:05:33.262 "mask": "0x4000", 00:05:33.262 "tpoint_mask": "0x0" 00:05:33.262 }, 00:05:33.262 "sock": { 00:05:33.262 "mask": "0x8000", 00:05:33.262 "tpoint_mask": "0x0" 00:05:33.262 }, 00:05:33.262 "blob": { 00:05:33.262 "mask": "0x10000", 00:05:33.262 "tpoint_mask": "0x0" 00:05:33.262 }, 00:05:33.262 "bdev_raid": { 00:05:33.262 "mask": "0x20000", 00:05:33.262 "tpoint_mask": "0x0" 00:05:33.262 }, 00:05:33.262 "scheduler": { 00:05:33.262 "mask": "0x40000", 00:05:33.262 "tpoint_mask": "0x0" 00:05:33.262 } 00:05:33.262 }' 00:05:33.262 10:28:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:33.520 10:28:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:33.520 10:28:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:33.520 10:28:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:33.520 10:28:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:33.520 10:28:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:33.520 10:28:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:33.520 10:28:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:33.520 10:28:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:33.520 10:28:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:33.520 00:05:33.520 real 0m0.265s 00:05:33.520 user 0m0.232s 00:05:33.520 sys 0m0.023s 00:05:33.520 10:28:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:33.520 10:28:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.520 ************************************ 00:05:33.520 END TEST rpc_trace_cmd_test 00:05:33.520 ************************************ 00:05:33.779 10:28:59 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:33.779 10:28:59 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:33.779 10:28:59 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:33.779 10:28:59 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:33.779 10:28:59 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:33.779 10:28:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.779 ************************************ 00:05:33.779 START TEST rpc_daemon_integrity 00:05:33.779 ************************************ 00:05:33.779 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:33.779 10:28:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:33.779 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.779 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.779 
10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.779 10:28:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:33.779 10:28:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:33.779 10:28:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:33.779 10:28:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:33.779 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.779 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.779 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.779 10:28:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:33.779 10:28:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:33.779 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.779 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.779 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.779 10:28:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:33.779 { 00:05:33.779 "name": "Malloc2", 00:05:33.779 "aliases": [ 00:05:33.779 "f2eb4e82-b07d-48b6-8d26-004146e04a28" 00:05:33.779 ], 00:05:33.779 "product_name": "Malloc disk", 00:05:33.779 "block_size": 512, 00:05:33.779 "num_blocks": 16384, 00:05:33.779 "uuid": "f2eb4e82-b07d-48b6-8d26-004146e04a28", 00:05:33.779 "assigned_rate_limits": { 00:05:33.779 "rw_ios_per_sec": 0, 00:05:33.779 "rw_mbytes_per_sec": 0, 00:05:33.779 "r_mbytes_per_sec": 0, 00:05:33.779 "w_mbytes_per_sec": 0 00:05:33.779 }, 00:05:33.779 "claimed": false, 00:05:33.779 "zoned": false, 00:05:33.779 "supported_io_types": { 00:05:33.779 "read": true, 00:05:33.779 "write": true, 00:05:33.779 "unmap": true, 00:05:33.779 "flush": true, 00:05:33.779 "reset": true, 00:05:33.779 "nvme_admin": false, 00:05:33.779 "nvme_io": false, 00:05:33.779 "nvme_io_md": false, 00:05:33.779 "write_zeroes": true, 00:05:33.779 "zcopy": true, 00:05:33.779 "get_zone_info": false, 00:05:33.779 "zone_management": false, 00:05:33.779 "zone_append": false, 00:05:33.779 "compare": false, 00:05:33.779 "compare_and_write": false, 00:05:33.779 "abort": true, 00:05:33.779 "seek_hole": false, 00:05:33.779 "seek_data": false, 00:05:33.779 "copy": true, 00:05:33.779 "nvme_iov_md": false 00:05:33.779 }, 00:05:33.779 "memory_domains": [ 00:05:33.779 { 00:05:33.779 "dma_device_id": "system", 00:05:33.779 "dma_device_type": 1 00:05:33.779 }, 00:05:33.779 { 00:05:33.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.779 "dma_device_type": 2 00:05:33.779 } 00:05:33.779 ], 00:05:33.779 "driver_specific": {} 00:05:33.779 } 00:05:33.779 ]' 00:05:33.779 10:28:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:33.779 10:28:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:33.779 10:28:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:33.779 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.779 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.779 [2024-11-15 10:28:59.193328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:33.780 [2024-11-15 10:28:59.193384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:05:33.780 [2024-11-15 10:28:59.193404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16901e0 00:05:33.780 [2024-11-15 10:28:59.193414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:33.780 [2024-11-15 10:28:59.195319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:33.780 [2024-11-15 10:28:59.195369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:33.780 Passthru0 00:05:33.780 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.780 10:28:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:33.780 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.780 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.780 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.780 10:28:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:33.780 { 00:05:33.780 "name": "Malloc2", 00:05:33.780 "aliases": [ 00:05:33.780 "f2eb4e82-b07d-48b6-8d26-004146e04a28" 00:05:33.780 ], 00:05:33.780 "product_name": "Malloc disk", 00:05:33.780 "block_size": 512, 00:05:33.780 "num_blocks": 16384, 00:05:33.780 "uuid": "f2eb4e82-b07d-48b6-8d26-004146e04a28", 00:05:33.780 "assigned_rate_limits": { 00:05:33.780 "rw_ios_per_sec": 0, 00:05:33.780 "rw_mbytes_per_sec": 0, 00:05:33.780 "r_mbytes_per_sec": 0, 00:05:33.780 "w_mbytes_per_sec": 0 00:05:33.780 }, 00:05:33.780 "claimed": true, 00:05:33.780 "claim_type": "exclusive_write", 00:05:33.780 "zoned": false, 00:05:33.780 "supported_io_types": { 00:05:33.780 "read": true, 00:05:33.780 "write": true, 00:05:33.780 "unmap": true, 00:05:33.780 "flush": true, 00:05:33.780 "reset": true, 00:05:33.780 "nvme_admin": false, 00:05:33.780 "nvme_io": false, 00:05:33.780 "nvme_io_md": false, 00:05:33.780 "write_zeroes": true, 00:05:33.780 "zcopy": true, 00:05:33.780 "get_zone_info": false, 00:05:33.780 "zone_management": false, 00:05:33.780 "zone_append": false, 00:05:33.780 "compare": false, 00:05:33.780 "compare_and_write": false, 00:05:33.780 "abort": true, 00:05:33.780 "seek_hole": false, 00:05:33.780 "seek_data": false, 00:05:33.780 "copy": true, 00:05:33.780 "nvme_iov_md": false 00:05:33.780 }, 00:05:33.780 "memory_domains": [ 00:05:33.780 { 00:05:33.780 "dma_device_id": "system", 00:05:33.780 "dma_device_type": 1 00:05:33.780 }, 00:05:33.780 { 00:05:33.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.780 "dma_device_type": 2 00:05:33.780 } 00:05:33.780 ], 00:05:33.780 "driver_specific": {} 00:05:33.780 }, 00:05:33.780 { 00:05:33.780 "name": "Passthru0", 00:05:33.780 "aliases": [ 00:05:33.780 "4aebbbf1-0654-5b2a-9979-992095e68ce1" 00:05:33.780 ], 00:05:33.780 "product_name": "passthru", 00:05:33.780 "block_size": 512, 00:05:33.780 "num_blocks": 16384, 00:05:33.780 "uuid": "4aebbbf1-0654-5b2a-9979-992095e68ce1", 00:05:33.780 "assigned_rate_limits": { 00:05:33.780 "rw_ios_per_sec": 0, 00:05:33.780 "rw_mbytes_per_sec": 0, 00:05:33.780 "r_mbytes_per_sec": 0, 00:05:33.780 "w_mbytes_per_sec": 0 00:05:33.780 }, 00:05:33.780 "claimed": false, 00:05:33.780 "zoned": false, 00:05:33.780 "supported_io_types": { 00:05:33.780 "read": true, 00:05:33.780 "write": true, 00:05:33.780 "unmap": true, 00:05:33.780 "flush": true, 00:05:33.780 "reset": true, 00:05:33.780 "nvme_admin": false, 00:05:33.780 "nvme_io": false, 00:05:33.780 
"nvme_io_md": false, 00:05:33.780 "write_zeroes": true, 00:05:33.780 "zcopy": true, 00:05:33.780 "get_zone_info": false, 00:05:33.780 "zone_management": false, 00:05:33.780 "zone_append": false, 00:05:33.780 "compare": false, 00:05:33.780 "compare_and_write": false, 00:05:33.780 "abort": true, 00:05:33.780 "seek_hole": false, 00:05:33.780 "seek_data": false, 00:05:33.780 "copy": true, 00:05:33.780 "nvme_iov_md": false 00:05:33.780 }, 00:05:33.780 "memory_domains": [ 00:05:33.780 { 00:05:33.780 "dma_device_id": "system", 00:05:33.780 "dma_device_type": 1 00:05:33.780 }, 00:05:33.780 { 00:05:33.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.780 "dma_device_type": 2 00:05:33.780 } 00:05:33.780 ], 00:05:33.780 "driver_specific": { 00:05:33.780 "passthru": { 00:05:33.780 "name": "Passthru0", 00:05:33.780 "base_bdev_name": "Malloc2" 00:05:33.780 } 00:05:33.780 } 00:05:33.780 } 00:05:33.780 ]' 00:05:33.780 10:28:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:34.039 10:28:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:34.039 10:28:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:34.039 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.039 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.039 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.039 10:28:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:34.039 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.039 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.039 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.039 10:28:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:34.039 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.039 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.039 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.039 10:28:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:34.039 10:28:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:34.039 10:28:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:34.039 00:05:34.039 real 0m0.359s 00:05:34.039 user 0m0.249s 00:05:34.039 sys 0m0.039s 00:05:34.039 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:34.039 ************************************ 00:05:34.039 END TEST rpc_daemon_integrity 00:05:34.039 ************************************ 00:05:34.039 10:28:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.039 10:28:59 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:34.039 10:28:59 rpc -- rpc/rpc.sh@84 -- # killprocess 56775 00:05:34.039 10:28:59 rpc -- common/autotest_common.sh@952 -- # '[' -z 56775 ']' 00:05:34.039 10:28:59 rpc -- common/autotest_common.sh@956 -- # kill -0 56775 00:05:34.039 10:28:59 rpc -- common/autotest_common.sh@957 -- # uname 00:05:34.039 10:28:59 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:34.039 10:28:59 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56775 00:05:34.039 10:28:59 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:05:34.039 10:28:59 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:34.039 killing process with pid 56775 00:05:34.039 10:28:59 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56775' 00:05:34.039 10:28:59 rpc -- common/autotest_common.sh@971 -- # kill 56775 00:05:34.039 10:28:59 rpc -- common/autotest_common.sh@976 -- # wait 56775 00:05:34.648 00:05:34.648 real 0m2.630s 00:05:34.648 user 0m3.340s 00:05:34.648 sys 0m0.719s 00:05:34.648 10:28:59 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:34.648 10:28:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.648 ************************************ 00:05:34.648 END TEST rpc 00:05:34.648 ************************************ 00:05:34.648 10:28:59 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:34.648 10:28:59 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:34.648 10:28:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:34.648 10:28:59 -- common/autotest_common.sh@10 -- # set +x 00:05:34.648 ************************************ 00:05:34.648 START TEST skip_rpc 00:05:34.648 ************************************ 00:05:34.648 10:28:59 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:34.648 * Looking for test storage... 00:05:34.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:34.648 10:28:59 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:34.648 10:28:59 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:34.648 10:28:59 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:34.648 10:29:00 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.648 10:29:00 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:34.648 10:29:00 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.648 10:29:00 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:34.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.648 --rc genhtml_branch_coverage=1 00:05:34.648 --rc genhtml_function_coverage=1 00:05:34.648 --rc genhtml_legend=1 00:05:34.648 --rc geninfo_all_blocks=1 00:05:34.648 --rc geninfo_unexecuted_blocks=1 00:05:34.648 00:05:34.648 ' 00:05:34.648 10:29:00 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:34.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.648 --rc genhtml_branch_coverage=1 00:05:34.648 --rc genhtml_function_coverage=1 00:05:34.648 --rc genhtml_legend=1 00:05:34.648 --rc geninfo_all_blocks=1 00:05:34.648 --rc geninfo_unexecuted_blocks=1 00:05:34.648 00:05:34.648 ' 00:05:34.648 10:29:00 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:34.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.648 --rc genhtml_branch_coverage=1 00:05:34.648 --rc genhtml_function_coverage=1 00:05:34.648 --rc genhtml_legend=1 00:05:34.648 --rc geninfo_all_blocks=1 00:05:34.648 --rc geninfo_unexecuted_blocks=1 00:05:34.648 00:05:34.648 ' 00:05:34.648 10:29:00 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:34.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.648 --rc genhtml_branch_coverage=1 00:05:34.648 --rc genhtml_function_coverage=1 00:05:34.648 --rc genhtml_legend=1 00:05:34.648 --rc geninfo_all_blocks=1 00:05:34.648 --rc geninfo_unexecuted_blocks=1 00:05:34.648 00:05:34.648 ' 00:05:34.648 10:29:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:34.648 10:29:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:34.648 10:29:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:34.648 10:29:00 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:34.648 10:29:00 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:34.648 10:29:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.648 ************************************ 00:05:34.648 START TEST skip_rpc 00:05:34.648 ************************************ 00:05:34.648 10:29:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:05:34.648 10:29:00 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56973 00:05:34.648 10:29:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:34.648 10:29:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.648 10:29:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:34.907 [2024-11-15 10:29:00.186185] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:05:34.907 [2024-11-15 10:29:00.186328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56973 ] 00:05:34.907 [2024-11-15 10:29:00.339177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.165 [2024-11-15 10:29:00.423749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.165 [2024-11-15 10:29:00.499967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56973 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 56973 ']' 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 56973 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56973 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:40.443 killing process with pid 56973 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 56973' 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 56973 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 56973 00:05:40.443 00:05:40.443 real 0m5.428s 00:05:40.443 user 0m5.049s 00:05:40.443 sys 0m0.289s 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:40.443 10:29:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.443 ************************************ 00:05:40.443 END TEST skip_rpc 00:05:40.443 ************************************ 00:05:40.443 10:29:05 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:40.443 10:29:05 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:40.443 10:29:05 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:40.443 10:29:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.443 ************************************ 00:05:40.443 START TEST skip_rpc_with_json 00:05:40.443 ************************************ 00:05:40.443 10:29:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:05:40.443 10:29:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:40.443 10:29:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57061 00:05:40.443 10:29:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.443 10:29:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.443 10:29:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57061 00:05:40.443 10:29:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57061 ']' 00:05:40.444 10:29:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.444 10:29:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:40.444 10:29:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.444 10:29:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:40.444 10:29:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.444 [2024-11-15 10:29:05.648657] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
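# What the skip_rpc test that just finished verified: started with
# --no-rpc-server, the target never listens on /var/tmp/spdk.sock, so
# rpc_cmd fails and the test asserts es=1. A rough standalone reproduction
# (paths from this run; tgt_pid is an illustrative variable):
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5
/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version \
    && echo 'unexpected: RPC answered' \
    || echo 'RPC refused, as expected'
kill "$tgt_pid"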
00:05:40.444 [2024-11-15 10:29:05.648771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57061 ] 00:05:40.444 [2024-11-15 10:29:05.797616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.444 [2024-11-15 10:29:05.860238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.444 [2024-11-15 10:29:05.935162] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.702 10:29:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:40.702 10:29:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:05:40.702 10:29:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:40.702 10:29:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.702 10:29:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.702 [2024-11-15 10:29:06.142885] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:40.702 request: 00:05:40.702 { 00:05:40.702 "trtype": "tcp", 00:05:40.702 "method": "nvmf_get_transports", 00:05:40.702 "req_id": 1 00:05:40.702 } 00:05:40.702 Got JSON-RPC error response 00:05:40.702 response: 00:05:40.702 { 00:05:40.702 "code": -19, 00:05:40.702 "message": "No such device" 00:05:40.702 } 00:05:40.702 10:29:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:40.702 10:29:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:40.702 10:29:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.702 10:29:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.702 [2024-11-15 10:29:06.154990] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:40.702 10:29:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.702 10:29:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:40.702 10:29:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.702 10:29:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.961 10:29:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.961 10:29:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:40.961 { 00:05:40.961 "subsystems": [ 00:05:40.961 { 00:05:40.961 "subsystem": "fsdev", 00:05:40.961 "config": [ 00:05:40.961 { 00:05:40.961 "method": "fsdev_set_opts", 00:05:40.961 "params": { 00:05:40.961 "fsdev_io_pool_size": 65535, 00:05:40.961 "fsdev_io_cache_size": 256 00:05:40.961 } 00:05:40.961 } 00:05:40.961 ] 00:05:40.961 }, 00:05:40.961 { 00:05:40.961 "subsystem": "keyring", 00:05:40.961 "config": [] 00:05:40.961 }, 00:05:40.961 { 00:05:40.961 "subsystem": "iobuf", 00:05:40.961 "config": [ 00:05:40.961 { 00:05:40.961 "method": "iobuf_set_options", 00:05:40.961 "params": { 00:05:40.961 "small_pool_count": 8192, 00:05:40.961 "large_pool_count": 1024, 00:05:40.961 "small_bufsize": 8192, 00:05:40.961 "large_bufsize": 135168, 00:05:40.961 "enable_numa": false 00:05:40.961 } 
      }
    ]
  },
  {
    "subsystem": "sock",
    "config": [
      {
        "method": "sock_set_default_impl",
        "params": { "impl_name": "uring" }
      },
      {
        "method": "sock_impl_set_options",
        "params": { "impl_name": "ssl", "recv_buf_size": 4096, "send_buf_size": 4096,
                    "enable_recv_pipe": true, "enable_quickack": false, "enable_placement_id": 0,
                    "enable_zerocopy_send_server": true, "enable_zerocopy_send_client": false,
                    "zerocopy_threshold": 0, "tls_version": 0, "enable_ktls": false }
      },
      {
        "method": "sock_impl_set_options",
        "params": { "impl_name": "posix", "recv_buf_size": 2097152, "send_buf_size": 2097152,
                    "enable_recv_pipe": true, "enable_quickack": false, "enable_placement_id": 0,
                    "enable_zerocopy_send_server": true, "enable_zerocopy_send_client": false,
                    "zerocopy_threshold": 0, "tls_version": 0, "enable_ktls": false }
      },
      {
        "method": "sock_impl_set_options",
        "params": { "impl_name": "uring", "recv_buf_size": 2097152, "send_buf_size": 2097152,
                    "enable_recv_pipe": true, "enable_quickack": false, "enable_placement_id": 0,
                    "enable_zerocopy_send_server": false, "enable_zerocopy_send_client": false,
                    "zerocopy_threshold": 0, "tls_version": 0, "enable_ktls": false }
      }
    ]
  },
  { "subsystem": "vmd", "config": [] },
  {
    "subsystem": "accel",
    "config": [
      {
        "method": "accel_set_options",
        "params": { "small_cache_size": 128, "large_cache_size": 16,
                    "task_count": 2048, "sequence_count": 2048, "buf_count": 2048 }
      }
    ]
  },
  {
    "subsystem": "bdev",
    "config": [
      {
        "method": "bdev_set_options",
        "params": { "bdev_io_pool_size": 65535, "bdev_io_cache_size": 256, "bdev_auto_examine": true,
                    "iobuf_small_cache_size": 128, "iobuf_large_cache_size": 16 }
      },
      {
        "method": "bdev_raid_set_options",
        "params": { "process_window_size_kb": 1024, "process_max_bandwidth_mb_sec": 0 }
      },
      {
        "method": "bdev_iscsi_set_options",
        "params": { "timeout_sec": 30 }
      },
      {
        "method": "bdev_nvme_set_options",
        "params": { "action_on_timeout": "none", "timeout_us": 0, "timeout_admin_us": 0,
                    "keep_alive_timeout_ms": 10000, "arbitration_burst": 0,
                    "low_priority_weight": 0, "medium_priority_weight": 0, "high_priority_weight": 0,
                    "nvme_adminq_poll_period_us": 10000, "nvme_ioq_poll_period_us": 0,
                    "io_queue_requests": 0, "delay_cmd_submit": true,
                    "transport_retry_count": 4, "bdev_retry_count": 3, "transport_ack_timeout": 0,
                    "ctrlr_loss_timeout_sec": 0, "reconnect_delay_sec": 0, "fast_io_fail_timeout_sec": 0,
                    "disable_auto_failback": false, "generate_uuids": false, "transport_tos": 0,
                    "nvme_error_stat": false, "rdma_srq_size": 0, "io_path_stat": false,
                    "allow_accel_sequence": false, "rdma_max_cq_size": 0, "rdma_cm_event_timeout_ms": 0,
                    "dhchap_digests": [ "sha256", "sha384", "sha512" ],
                    "dhchap_dhgroups": [ "null", "ffdhe2048", "ffdhe3072", "ffdhe4096", "ffdhe6144", "ffdhe8192" ] }
      },
      {
        "method": "bdev_nvme_set_hotplug",
        "params": { "period_us": 100000, "enable": false }
      },
      { "method": "bdev_wait_for_examine" }
    ]
  },
  { "subsystem": "scsi", "config": null },
  {
    "subsystem": "scheduler",
    "config": [
      { "method": "framework_set_scheduler", "params": { "name": "static" } }
    ]
  },
  { "subsystem": "vhost_scsi", "config": [] },
  { "subsystem": "vhost_blk", "config": [] },
  { "subsystem": "ublk", "config": [] },
  { "subsystem": "nbd", "config": [] },
  {
    "subsystem": "nvmf",
    "config": [
      {
        "method": "nvmf_set_config",
        "params": { "discovery_filter": "match_any",
                    "admin_cmd_passthru": { "identify_ctrlr": false },
                    "dhchap_digests": [ "sha256", "sha384", "sha512" ],
                    "dhchap_dhgroups": [ "null", "ffdhe2048", "ffdhe3072", "ffdhe4096", "ffdhe6144", "ffdhe8192" ] }
      },
      { "method": "nvmf_set_max_subsystems", "params": { "max_subsystems": 1024 } },
      { "method": "nvmf_set_crdt", "params": { "crdt1": 0, "crdt2": 0, "crdt3": 0 } },
      {
        "method": "nvmf_create_transport",
        "params": { "trtype": "TCP", "max_queue_depth": 128, "max_io_qpairs_per_ctrlr": 127,
                    "in_capsule_data_size": 4096, "max_io_size": 131072, "io_unit_size": 131072,
                    "max_aq_depth": 128, "num_shared_buffers": 511, "buf_cache_size": 4294967295,
                    "dif_insert_or_strip": false, "zcopy": false, "c2h_success": true,
                    "sock_priority": 0, "abort_timeout_sec": 1, "ack_timeout": 0, "data_wr_pool_size": 0 }
      }
    ]
  },
  {
    "subsystem": "iscsi",
    "config": [
      {
        "method": "iscsi_set_options",
        "params": { "node_base": "iqn.2016-06.io.spdk", "max_sessions": 128,
                    "max_connections_per_session": 2, "max_queue_depth": 64,
                    "default_time2wait": 2, "default_time2retain": 20, "first_burst_length": 8192,
                    "immediate_data": true, "allow_duplicated_isid": false, "error_recovery_level": 0,
                    "nop_timeout": 60, "nop_in_interval": 30, "disable_chap": false,
                    "require_chap": false, "mutual_chap": false, "chap_group": 0,
                    "max_large_datain_per_connection": 64, "max_r2t_per_connection": 4,
                    "pdu_pool_size": 36864, "immediate_data_pool_size": 16384, "data_out_pool_size": 2048 }
      }
    ]
  }
]
}
00:05:40.962 10:29:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:05:40.962 10:29:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57061
00:05:40.962 10:29:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57061 ']'
00:05:40.962 10:29:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57061
00:05:40.962 10:29:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname
00:05:40.962 10:29:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:40.962 10:29:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57061
00:05:40.962 10:29:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:40.962 10:29:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:40.962 10:29:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57061'
killing process with pid 57061
10:29:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57061
00:05:40.962 10:29:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57061
00:05:41.530 10:29:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:05:41.530 10:29:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57081
00:05:41.530 10:29:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:05:46.798 10:29:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57081
00:05:46.798 10:29:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57081 ']'
00:05:46.798 10:29:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57081
00:05:46.798 10:29:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname
00:05:46.798 10:29:11
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:46.798 10:29:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57081 00:05:46.798 10:29:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:46.798 10:29:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:46.798 killing process with pid 57081 00:05:46.798 10:29:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57081' 00:05:46.798 10:29:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57081 00:05:46.798 10:29:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57081 00:05:46.798 10:29:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:46.798 10:29:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:46.798 00:05:46.798 real 0m6.628s 00:05:46.798 user 0m6.187s 00:05:46.798 sys 0m0.631s 00:05:46.798 10:29:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:46.798 10:29:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.798 ************************************ 00:05:46.798 END TEST skip_rpc_with_json 00:05:46.798 ************************************ 00:05:46.798 10:29:12 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:46.798 10:29:12 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:46.798 10:29:12 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:46.798 10:29:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.798 ************************************ 00:05:46.798 START TEST skip_rpc_with_delay 00:05:46.798 ************************************ 00:05:46.798 10:29:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:05:46.798 10:29:12 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.798 10:29:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:46.798 10:29:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.798 10:29:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.798 10:29:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.798 10:29:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.798 10:29:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.798 10:29:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.798 10:29:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.798 10:29:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.798 10:29:12 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:46.798 10:29:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:47.057 [2024-11-15 10:29:12.334948] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:47.057 10:29:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:47.057 10:29:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.057 10:29:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:47.057 10:29:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.057 00:05:47.057 real 0m0.095s 00:05:47.057 user 0m0.068s 00:05:47.057 sys 0m0.027s 00:05:47.057 10:29:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:47.057 10:29:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:47.057 ************************************ 00:05:47.057 END TEST skip_rpc_with_delay 00:05:47.057 ************************************ 00:05:47.057 10:29:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:47.057 10:29:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:47.057 10:29:12 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:47.057 10:29:12 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:47.057 10:29:12 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:47.057 10:29:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.057 ************************************ 00:05:47.057 START TEST exit_on_failed_rpc_init 00:05:47.057 ************************************ 00:05:47.057 10:29:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:05:47.057 10:29:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57190 00:05:47.057 10:29:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57190 00:05:47.057 10:29:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57190 ']' 00:05:47.057 10:29:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.057 10:29:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.057 10:29:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:47.057 10:29:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.057 10:29:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:47.057 10:29:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:47.057 [2024-11-15 10:29:12.484724] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
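
The skip_rpc_with_delay failure above is the expected one: --wait-for-rpc only makes sense when an RPC server will actually be started. A minimal sketch of the flag combination the test exercises (binary path as used in this run; the harness's NOT wrapper turns the non-zero exit into a pass):

  # Expected to fail: with --no-rpc-server there is nothing to wait for.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # stderr: app.c: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
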
00:05:47.057 [2024-11-15 10:29:12.484818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57190 ] 00:05:47.316 [2024-11-15 10:29:12.624943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.316 [2024-11-15 10:29:12.677437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.316 [2024-11-15 10:29:12.749524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.253 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:48.253 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:05:48.253 10:29:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.253 10:29:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:48.253 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:48.253 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:48.253 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.253 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.253 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.253 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.253 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.253 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.253 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.253 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:48.253 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:48.253 [2024-11-15 10:29:13.568124] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:05:48.253 [2024-11-15 10:29:13.568251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57209 ] 00:05:48.253 [2024-11-15 10:29:13.721653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.512 [2024-11-15 10:29:13.786952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.512 [2024-11-15 10:29:13.787076] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
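
The rpc.c error just logged is the point of exit_on_failed_rpc_init: the first target (pid 57190) already owns the default RPC socket, so the second instance cannot listen and its init fails. A sketch of the collision, plus the usual way two targets coexist outside this test (the second socket path below is illustrative):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # owns /var/tmp/spdk.sock
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2     # fails: RPC socket path in use
  # A second instance would normally pass a distinct RPC socket instead:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock
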
00:05:48.512 [2024-11-15 10:29:13.787094] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:48.512 [2024-11-15 10:29:13.787104] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:48.512 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:48.512 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:48.512 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:48.512 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:48.512 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:48.512 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:48.512 10:29:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:48.512 10:29:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57190 00:05:48.512 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57190 ']' 00:05:48.512 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57190 00:05:48.512 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:05:48.512 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:48.512 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57190 00:05:48.512 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:48.512 killing process with pid 57190 00:05:48.512 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:48.512 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57190' 00:05:48.512 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57190 00:05:48.512 10:29:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57190 00:05:49.080 00:05:49.080 real 0m1.872s 00:05:49.080 user 0m2.191s 00:05:49.080 sys 0m0.421s 00:05:49.080 10:29:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:49.080 10:29:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:49.080 ************************************ 00:05:49.080 END TEST exit_on_failed_rpc_init 00:05:49.080 ************************************ 00:05:49.080 10:29:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:49.080 00:05:49.080 real 0m14.431s 00:05:49.080 user 0m13.673s 00:05:49.080 sys 0m1.584s 00:05:49.080 10:29:14 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:49.080 10:29:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.080 ************************************ 00:05:49.080 END TEST skip_rpc 00:05:49.080 ************************************ 00:05:49.080 10:29:14 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:49.080 10:29:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:49.080 10:29:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:49.080 10:29:14 -- common/autotest_common.sh@10 -- # set +x 00:05:49.080 
************************************ 00:05:49.080 START TEST rpc_client 00:05:49.080 ************************************ 00:05:49.080 10:29:14 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:49.080 * Looking for test storage... 00:05:49.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:49.080 10:29:14 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:49.080 10:29:14 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:49.080 10:29:14 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:49.080 10:29:14 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.080 10:29:14 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:49.080 10:29:14 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.080 10:29:14 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:49.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.080 --rc genhtml_branch_coverage=1 00:05:49.080 --rc genhtml_function_coverage=1 00:05:49.080 --rc genhtml_legend=1 00:05:49.080 --rc geninfo_all_blocks=1 00:05:49.080 --rc geninfo_unexecuted_blocks=1 00:05:49.080 00:05:49.080 ' 00:05:49.080 10:29:14 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:49.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.080 --rc genhtml_branch_coverage=1 00:05:49.080 --rc genhtml_function_coverage=1 00:05:49.080 --rc genhtml_legend=1 00:05:49.080 --rc geninfo_all_blocks=1 00:05:49.080 --rc geninfo_unexecuted_blocks=1 00:05:49.080 00:05:49.080 ' 00:05:49.080 10:29:14 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:49.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.080 --rc genhtml_branch_coverage=1 00:05:49.080 --rc genhtml_function_coverage=1 00:05:49.080 --rc genhtml_legend=1 00:05:49.080 --rc geninfo_all_blocks=1 00:05:49.080 --rc geninfo_unexecuted_blocks=1 00:05:49.080 00:05:49.080 ' 00:05:49.080 10:29:14 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:49.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.080 --rc genhtml_branch_coverage=1 00:05:49.080 --rc genhtml_function_coverage=1 00:05:49.080 --rc genhtml_legend=1 00:05:49.080 --rc geninfo_all_blocks=1 00:05:49.080 --rc geninfo_unexecuted_blocks=1 00:05:49.080 00:05:49.080 ' 00:05:49.080 10:29:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:49.339 OK 00:05:49.339 10:29:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:49.339 00:05:49.339 real 0m0.204s 00:05:49.339 user 0m0.127s 00:05:49.339 sys 0m0.085s 00:05:49.339 10:29:14 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:49.339 10:29:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:49.339 ************************************ 00:05:49.339 END TEST rpc_client 00:05:49.339 ************************************ 00:05:49.339 10:29:14 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:49.339 10:29:14 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:49.339 10:29:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:49.339 10:29:14 -- common/autotest_common.sh@10 -- # set +x 00:05:49.339 ************************************ 00:05:49.339 START TEST json_config 00:05:49.339 ************************************ 00:05:49.339 10:29:14 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:49.339 10:29:14 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:49.339 10:29:14 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:49.339 10:29:14 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:49.339 10:29:14 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:49.339 10:29:14 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.339 10:29:14 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.339 10:29:14 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.339 10:29:14 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.339 10:29:14 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.339 10:29:14 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.339 10:29:14 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.339 10:29:14 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.339 10:29:14 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.339 10:29:14 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.339 10:29:14 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.339 10:29:14 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:49.339 10:29:14 json_config -- scripts/common.sh@345 -- # : 1 00:05:49.339 10:29:14 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.339 10:29:14 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.339 10:29:14 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:49.339 10:29:14 json_config -- scripts/common.sh@353 -- # local d=1 00:05:49.339 10:29:14 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.339 10:29:14 json_config -- scripts/common.sh@355 -- # echo 1 00:05:49.339 10:29:14 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.339 10:29:14 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:49.339 10:29:14 json_config -- scripts/common.sh@353 -- # local d=2 00:05:49.339 10:29:14 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.339 10:29:14 json_config -- scripts/common.sh@355 -- # echo 2 00:05:49.339 10:29:14 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.339 10:29:14 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.339 10:29:14 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.339 10:29:14 json_config -- scripts/common.sh@368 -- # return 0 00:05:49.339 10:29:14 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.339 10:29:14 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:49.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.339 --rc genhtml_branch_coverage=1 00:05:49.339 --rc genhtml_function_coverage=1 00:05:49.339 --rc genhtml_legend=1 00:05:49.339 --rc geninfo_all_blocks=1 00:05:49.339 --rc geninfo_unexecuted_blocks=1 00:05:49.339 00:05:49.339 ' 00:05:49.339 10:29:14 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:49.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.339 --rc genhtml_branch_coverage=1 00:05:49.339 --rc genhtml_function_coverage=1 00:05:49.339 --rc genhtml_legend=1 00:05:49.339 --rc geninfo_all_blocks=1 00:05:49.339 --rc geninfo_unexecuted_blocks=1 00:05:49.339 00:05:49.339 ' 00:05:49.339 10:29:14 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:49.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.339 --rc genhtml_branch_coverage=1 00:05:49.339 --rc genhtml_function_coverage=1 00:05:49.339 --rc genhtml_legend=1 00:05:49.339 --rc geninfo_all_blocks=1 00:05:49.339 --rc geninfo_unexecuted_blocks=1 00:05:49.339 00:05:49.339 ' 00:05:49.339 10:29:14 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:49.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.339 --rc genhtml_branch_coverage=1 00:05:49.339 --rc genhtml_function_coverage=1 00:05:49.339 --rc genhtml_legend=1 00:05:49.339 --rc geninfo_all_blocks=1 00:05:49.339 --rc geninfo_unexecuted_blocks=1 00:05:49.339 00:05:49.339 ' 00:05:49.339 10:29:14 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:49.339 10:29:14 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:49.339 10:29:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:49.339 10:29:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:49.339 10:29:14 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:49.339 10:29:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:49.339 10:29:14 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:49.339 10:29:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:49.340 10:29:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:49.340 10:29:14 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:49.340 10:29:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:49.340 10:29:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:49.340 10:29:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:05:49.340 10:29:14 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:05:49.340 10:29:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:49.340 10:29:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:49.340 10:29:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:49.340 10:29:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:49.340 10:29:14 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:49.599 10:29:14 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:49.599 10:29:14 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.599 10:29:14 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.599 10:29:14 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.599 10:29:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.599 10:29:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.599 10:29:14 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.599 10:29:14 json_config -- paths/export.sh@5 -- # export PATH 00:05:49.599 10:29:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.599 10:29:14 json_config -- nvmf/common.sh@51 -- # : 0 00:05:49.599 10:29:14 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:49.599 10:29:14 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:49.599 10:29:14 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:49.599 10:29:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:49.599 10:29:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:49.599 10:29:14 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:49.599 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:49.599 10:29:14 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:49.599 10:29:14 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:49.599 10:29:14 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:49.599 10:29:14 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:49.599 10:29:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:49.599 10:29:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:49.599 10:29:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:49.600 10:29:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:49.600 10:29:14 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:49.600 10:29:14 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:49.600 10:29:14 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:49.600 10:29:14 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:49.600 10:29:14 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:49.600 10:29:14 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:49.600 10:29:14 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:49.600 10:29:14 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:49.600 10:29:14 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:49.600 10:29:14 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:49.600 INFO: JSON configuration test init 00:05:49.600 10:29:14 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:49.600 10:29:14 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:49.600 10:29:14 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:49.600 10:29:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:49.600 10:29:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.600 10:29:14 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:49.600 10:29:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:49.600 10:29:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.600 Waiting for target to run... 00:05:49.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
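
The "[: : integer expression expected" message from nvmf/common.sh above is bash complaining about a numeric test on an empty string ('[' '' -eq 1 ']'); it is harness noise, not a test failure. A sketch of the pattern and the usual guard (the variable name here is illustrative, not the one in common.sh):

  some_flag=""
  [ "$some_flag" -eq 1 ]        # bash: [: : integer expression expected
  [ "${some_flag:-0}" -eq 1 ]   # defaulting the empty value keeps the test numeric
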
00:05:49.600 10:29:14 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:49.600 10:29:14 json_config -- json_config/common.sh@9 -- # local app=target 00:05:49.600 10:29:14 json_config -- json_config/common.sh@10 -- # shift 00:05:49.600 10:29:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:49.600 10:29:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:49.600 10:29:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:49.600 10:29:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.600 10:29:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.600 10:29:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57348 00:05:49.600 10:29:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:49.600 10:29:14 json_config -- json_config/common.sh@25 -- # waitforlisten 57348 /var/tmp/spdk_tgt.sock 00:05:49.600 10:29:14 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:49.600 10:29:14 json_config -- common/autotest_common.sh@833 -- # '[' -z 57348 ']' 00:05:49.600 10:29:14 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:49.600 10:29:14 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:49.600 10:29:14 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:49.600 10:29:14 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:49.600 10:29:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.600 [2024-11-15 10:29:14.929413] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
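
waitforlisten above blocks until the just-launched target answers on its RPC socket. A minimal sketch of that polling idea, assuming rpc_get_methods as the liveness probe (pid and socket path as in this run; the loop bound is illustrative):

  pid=57348 sock=/var/tmp/spdk_tgt.sock
  for (( i = 0; i < 100; i++ )); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
    kill -0 "$pid" 2>/dev/null || exit 1   # target died before it started listening
    sleep 0.1
  done
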
00:05:49.600 [2024-11-15 10:29:14.930061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57348 ] 00:05:50.168 [2024-11-15 10:29:15.360953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.168 [2024-11-15 10:29:15.418074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.734 00:05:50.734 10:29:15 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:50.734 10:29:15 json_config -- common/autotest_common.sh@866 -- # return 0 00:05:50.734 10:29:15 json_config -- json_config/common.sh@26 -- # echo '' 00:05:50.734 10:29:15 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:50.734 10:29:15 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:50.734 10:29:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:50.734 10:29:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.734 10:29:15 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:50.734 10:29:15 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:50.734 10:29:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:50.734 10:29:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.734 10:29:16 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:50.734 10:29:16 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:50.734 10:29:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:50.993 [2024-11-15 10:29:16.318620] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.253 10:29:16 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:51.253 10:29:16 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:51.253 10:29:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:51.253 10:29:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.253 10:29:16 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:51.253 10:29:16 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:51.253 10:29:16 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:51.253 10:29:16 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:51.253 10:29:16 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:51.253 10:29:16 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:51.253 10:29:16 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:51.253 10:29:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:51.512 10:29:16 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:51.512 10:29:16 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:51.512 10:29:16 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:05:51.512 10:29:16 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:51.512 10:29:16 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:51.512 10:29:16 json_config -- json_config/json_config.sh@54 -- # sort 00:05:51.512 10:29:16 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:51.512 10:29:16 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:51.512 10:29:16 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:51.512 10:29:16 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:51.512 10:29:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:51.512 10:29:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.512 10:29:16 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:51.512 10:29:16 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:51.512 10:29:16 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:51.512 10:29:16 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:51.512 10:29:16 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:51.512 10:29:16 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:51.512 10:29:16 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:51.512 10:29:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:51.512 10:29:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.512 10:29:16 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:51.512 10:29:16 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:51.512 10:29:16 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:51.512 10:29:16 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:51.512 10:29:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:51.772 MallocForNvmf0 00:05:51.772 10:29:17 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:51.772 10:29:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:52.031 MallocForNvmf1 00:05:52.031 10:29:17 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:52.031 10:29:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:52.290 [2024-11-15 10:29:17.709639] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:52.290 10:29:17 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:52.290 10:29:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:52.549 10:29:17 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:52.549 10:29:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:52.808 10:29:18 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:52.808 10:29:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:53.375 10:29:18 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:53.375 10:29:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:53.375 [2024-11-15 10:29:18.810418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:53.375 10:29:18 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:53.375 10:29:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:53.375 10:29:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.375 10:29:18 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:53.375 10:29:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:53.375 10:29:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.634 10:29:18 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:53.634 10:29:18 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:53.634 10:29:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:53.892 MallocBdevForConfigChangeCheck 00:05:53.892 10:29:19 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:53.892 10:29:19 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:53.892 10:29:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.892 10:29:19 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:53.892 10:29:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.460 INFO: shutting down applications... 00:05:54.460 10:29:19 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
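
For reference, the NVMe-oF target the test just assembled, restated as the same rpc.py calls in plain sequence (every name and parameter as it appears in the trace above):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  $rpc -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
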
00:05:54.460 10:29:19 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:54.460 10:29:19 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:54.460 10:29:19 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:54.460 10:29:19 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:54.718 Calling clear_iscsi_subsystem 00:05:54.718 Calling clear_nvmf_subsystem 00:05:54.718 Calling clear_nbd_subsystem 00:05:54.718 Calling clear_ublk_subsystem 00:05:54.718 Calling clear_vhost_blk_subsystem 00:05:54.718 Calling clear_vhost_scsi_subsystem 00:05:54.718 Calling clear_bdev_subsystem 00:05:54.718 10:29:20 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:54.718 10:29:20 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:54.718 10:29:20 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:54.718 10:29:20 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.718 10:29:20 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:54.718 10:29:20 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:55.288 10:29:20 json_config -- json_config/json_config.sh@352 -- # break 00:05:55.288 10:29:20 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:55.288 10:29:20 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:55.288 10:29:20 json_config -- json_config/common.sh@31 -- # local app=target 00:05:55.288 10:29:20 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:55.288 10:29:20 json_config -- json_config/common.sh@35 -- # [[ -n 57348 ]] 00:05:55.288 10:29:20 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57348 00:05:55.288 10:29:20 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:55.288 10:29:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:55.288 10:29:20 json_config -- json_config/common.sh@41 -- # kill -0 57348 00:05:55.288 10:29:20 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:55.860 10:29:21 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:55.860 10:29:21 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:55.860 10:29:21 json_config -- json_config/common.sh@41 -- # kill -0 57348 00:05:55.860 10:29:21 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:55.860 10:29:21 json_config -- json_config/common.sh@43 -- # break 00:05:55.860 10:29:21 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:55.860 10:29:21 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:55.860 SPDK target shutdown done 00:05:55.860 10:29:21 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:55.860 INFO: relaunching applications... 
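
The shutdown just logged is a bounded poll rather than a blind kill: SIGINT, then up to 30 half-second checks that the pid is gone. The same loop, sketched standalone with the values from this run:

  kill -SIGINT 57348
  for (( i = 0; i < 30; i++ )); do
    kill -0 57348 2>/dev/null || break   # process gone: clean shutdown
    sleep 0.5
  done
  echo 'SPDK target shutdown done'
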
00:05:55.860 10:29:21 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:55.860 10:29:21 json_config -- json_config/common.sh@9 -- # local app=target 00:05:55.860 10:29:21 json_config -- json_config/common.sh@10 -- # shift 00:05:55.860 10:29:21 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:55.860 10:29:21 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:55.860 10:29:21 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:55.860 10:29:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:55.860 10:29:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:55.860 10:29:21 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57549 00:05:55.860 10:29:21 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:55.860 Waiting for target to run... 00:05:55.860 10:29:21 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:55.860 10:29:21 json_config -- json_config/common.sh@25 -- # waitforlisten 57549 /var/tmp/spdk_tgt.sock 00:05:55.860 10:29:21 json_config -- common/autotest_common.sh@833 -- # '[' -z 57549 ']' 00:05:55.860 10:29:21 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:55.860 10:29:21 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:55.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:55.860 10:29:21 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:55.860 10:29:21 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:55.860 10:29:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.860 [2024-11-15 10:29:21.191362] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:05:55.860 [2024-11-15 10:29:21.191508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57549 ] 00:05:56.427 [2024-11-15 10:29:21.643738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.427 [2024-11-15 10:29:21.696595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.427 [2024-11-15 10:29:21.834757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.686 [2024-11-15 10:29:22.054239] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:56.686 [2024-11-15 10:29:22.086281] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:56.945 10:29:22 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:56.945 10:29:22 json_config -- common/autotest_common.sh@866 -- # return 0 00:05:56.945 00:05:56.945 10:29:22 json_config -- json_config/common.sh@26 -- # echo '' 00:05:56.945 10:29:22 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:56.945 INFO: Checking if target configuration is the same... 
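
Relaunching from spdk_tgt_config.json closes the save/restore loop: the configuration the first target reported via save_config becomes the --json input of the second. The essential round trip, sketched with the paths from this run:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
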
00:05:56.945 10:29:22 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:56.945 10:29:22 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:56.945 10:29:22 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:56.945 10:29:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:56.945 + '[' 2 -ne 2 ']' 00:05:56.945 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:56.945 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:56.945 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:56.945 +++ basename /dev/fd/62 00:05:56.945 ++ mktemp /tmp/62.XXX 00:05:56.945 + tmp_file_1=/tmp/62.BSp 00:05:56.945 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:56.945 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:56.945 + tmp_file_2=/tmp/spdk_tgt_config.json.24R 00:05:56.945 + ret=0 00:05:56.945 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:57.203 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:57.203 + diff -u /tmp/62.BSp /tmp/spdk_tgt_config.json.24R 00:05:57.461 INFO: JSON config files are the same 00:05:57.461 + echo 'INFO: JSON config files are the same' 00:05:57.461 + rm /tmp/62.BSp /tmp/spdk_tgt_config.json.24R 00:05:57.461 + exit 0 00:05:57.461 10:29:22 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:57.461 INFO: changing configuration and checking if this can be detected... 00:05:57.461 10:29:22 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:57.461 10:29:22 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:57.462 10:29:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:57.720 10:29:23 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:57.720 10:29:23 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:57.720 10:29:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:57.720 + '[' 2 -ne 2 ']' 00:05:57.720 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:57.720 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:57.720 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:57.720 +++ basename /dev/fd/62 00:05:57.720 ++ mktemp /tmp/62.XXX 00:05:57.720 + tmp_file_1=/tmp/62.4TV 00:05:57.720 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:57.720 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:57.720 + tmp_file_2=/tmp/spdk_tgt_config.json.LHl 00:05:57.720 + ret=0 00:05:57.720 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:57.978 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:58.237 + diff -u /tmp/62.4TV /tmp/spdk_tgt_config.json.LHl 00:05:58.237 + ret=1 00:05:58.237 + echo '=== Start of file: /tmp/62.4TV ===' 00:05:58.237 + cat /tmp/62.4TV 00:05:58.237 + echo '=== End of file: /tmp/62.4TV ===' 00:05:58.237 + echo '' 00:05:58.237 + echo '=== Start of file: /tmp/spdk_tgt_config.json.LHl ===' 00:05:58.237 + cat /tmp/spdk_tgt_config.json.LHl 00:05:58.237 + echo '=== End of file: /tmp/spdk_tgt_config.json.LHl ===' 00:05:58.237 + echo '' 00:05:58.237 + rm /tmp/62.4TV /tmp/spdk_tgt_config.json.LHl 00:05:58.237 + exit 1 00:05:58.237 INFO: configuration change detected. 00:05:58.237 10:29:23 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:58.237 10:29:23 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:58.237 10:29:23 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:58.237 10:29:23 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:58.237 10:29:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.237 10:29:23 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:58.237 10:29:23 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:58.237 10:29:23 json_config -- json_config/json_config.sh@324 -- # [[ -n 57549 ]] 00:05:58.237 10:29:23 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:58.237 10:29:23 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:58.237 10:29:23 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:58.237 10:29:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.237 10:29:23 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:58.237 10:29:23 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:58.237 10:29:23 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:58.237 10:29:23 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:58.237 10:29:23 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:58.237 10:29:23 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:58.237 10:29:23 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:58.237 10:29:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.237 10:29:23 json_config -- json_config/json_config.sh@330 -- # killprocess 57549 00:05:58.237 10:29:23 json_config -- common/autotest_common.sh@952 -- # '[' -z 57549 ']' 00:05:58.237 10:29:23 json_config -- common/autotest_common.sh@956 -- # kill -0 57549 00:05:58.237 10:29:23 json_config -- common/autotest_common.sh@957 -- # uname 00:05:58.237 10:29:23 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:58.237 10:29:23 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57549 00:05:58.237 
killing process with pid 57549 00:05:58.237 10:29:23 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:58.237 10:29:23 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:58.237 10:29:23 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57549' 00:05:58.237 10:29:23 json_config -- common/autotest_common.sh@971 -- # kill 57549 00:05:58.237 10:29:23 json_config -- common/autotest_common.sh@976 -- # wait 57549 00:05:58.496 10:29:23 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:58.496 10:29:23 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:58.496 10:29:23 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:58.496 10:29:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.496 INFO: Success 00:05:58.496 10:29:23 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:58.496 10:29:23 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:58.496 00:05:58.496 real 0m9.246s 00:05:58.496 user 0m13.487s 00:05:58.496 sys 0m1.830s 00:05:58.496 ************************************ 00:05:58.496 END TEST json_config 00:05:58.496 ************************************ 00:05:58.496 10:29:23 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:58.496 10:29:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.496 10:29:23 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:58.496 10:29:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:58.496 10:29:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:58.496 10:29:23 -- common/autotest_common.sh@10 -- # set +x 00:05:58.496 ************************************ 00:05:58.496 START TEST json_config_extra_key 00:05:58.496 ************************************ 00:05:58.496 10:29:23 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:58.754 10:29:23 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:58.754 10:29:23 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:58.754 10:29:23 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:58.754 10:29:24 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:58.754 10:29:24 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.754 10:29:24 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.754 10:29:24 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.754 10:29:24 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.754 10:29:24 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.754 10:29:24 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.754 10:29:24 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.755 10:29:24 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:58.755 10:29:24 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.755 10:29:24 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:58.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.755 --rc genhtml_branch_coverage=1 00:05:58.755 --rc genhtml_function_coverage=1 00:05:58.755 --rc genhtml_legend=1 00:05:58.755 --rc geninfo_all_blocks=1 00:05:58.755 --rc geninfo_unexecuted_blocks=1 00:05:58.755 00:05:58.755 ' 00:05:58.755 10:29:24 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:58.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.755 --rc genhtml_branch_coverage=1 00:05:58.755 --rc genhtml_function_coverage=1 00:05:58.755 --rc genhtml_legend=1 00:05:58.755 --rc geninfo_all_blocks=1 00:05:58.755 --rc geninfo_unexecuted_blocks=1 00:05:58.755 00:05:58.755 ' 00:05:58.755 10:29:24 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:58.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.755 --rc genhtml_branch_coverage=1 00:05:58.755 --rc genhtml_function_coverage=1 00:05:58.755 --rc genhtml_legend=1 00:05:58.755 --rc geninfo_all_blocks=1 00:05:58.755 --rc geninfo_unexecuted_blocks=1 00:05:58.755 00:05:58.755 ' 00:05:58.755 10:29:24 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:58.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.755 --rc genhtml_branch_coverage=1 00:05:58.755 --rc genhtml_function_coverage=1 00:05:58.755 --rc genhtml_legend=1 00:05:58.755 --rc geninfo_all_blocks=1 00:05:58.755 --rc geninfo_unexecuted_blocks=1 00:05:58.755 00:05:58.755 ' 00:05:58.755 10:29:24 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.755 10:29:24 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.755 10:29:24 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.755 10:29:24 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.755 10:29:24 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.755 10:29:24 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:58.755 10:29:24 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:58.755 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:58.755 10:29:24 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:58.755 10:29:24 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:58.755 10:29:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:58.755 10:29:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:58.755 10:29:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:58.755 10:29:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:58.755 10:29:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:58.755 10:29:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:58.755 10:29:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:58.755 10:29:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:58.755 10:29:24 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:58.755 10:29:24 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:58.755 INFO: launching applications... 
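The "[: : integer expression expected" message captured above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': the variable being tested expands to an empty string, and test's -eq operator requires integer operands on both sides. A defensive guard (an illustration with a stand-in variable name, not the actual fix in the repo) avoids the error:

  # '[' "$FLAG" -eq 1 ']' errors out when FLAG is unset or empty.
  # Defaulting to 0 keeps the numeric comparison well-formed:
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then       # SOME_FLAG is a hypothetical name
    echo "flag enabled"
  fi
  # equivalently: [[ -n $SOME_FLAG && $SOME_FLAG -eq 1 ]]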
00:05:58.755 10:29:24 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:58.755 10:29:24 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:58.755 10:29:24 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:58.755 10:29:24 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:58.755 10:29:24 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:58.755 10:29:24 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:58.756 10:29:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.756 10:29:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.756 10:29:24 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57703 00:05:58.756 10:29:24 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:58.756 Waiting for target to run... 00:05:58.756 10:29:24 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:58.756 10:29:24 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57703 /var/tmp/spdk_tgt.sock 00:05:58.756 10:29:24 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57703 ']' 00:05:58.756 10:29:24 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:58.756 10:29:24 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:58.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:58.756 10:29:24 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:58.756 10:29:24 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:58.756 10:29:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:58.756 [2024-11-15 10:29:24.196607] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:05:58.756 [2024-11-15 10:29:24.197493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57703 ] 00:05:59.321 [2024-11-15 10:29:24.637627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.321 [2024-11-15 10:29:24.687590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.321 [2024-11-15 10:29:24.720177] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.888 00:05:59.888 INFO: shutting down applications... 00:05:59.888 10:29:25 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:59.888 10:29:25 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:05:59.888 10:29:25 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:59.888 10:29:25 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
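For reference, the spdk_tgt invocation repeated throughout this log uses a small, fixed set of flags; the annotated sketch below restates them (flag meanings per standard SPDK usage; the backgrounding plus pid capture mirrors how the harness fills app_pid["target"] in the trace above):

  #   -m 0x1   CPU core mask: run a single reactor on core 0
  #   -s 1024  size of the DPDK memory pool, in MB
  #   -r ...   UNIX domain socket for the JSON-RPC server
  #   --json   apply this JSON config during startup
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
    -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock \
    --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  app_pid=$!                                 # the harness stores this per app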
00:05:59.888 10:29:25 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:59.888 10:29:25 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:59.888 10:29:25 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:59.888 10:29:25 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57703 ]] 00:05:59.888 10:29:25 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57703 00:05:59.888 10:29:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:59.888 10:29:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.888 10:29:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57703 00:05:59.888 10:29:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:00.455 10:29:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:00.455 10:29:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:00.455 10:29:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57703 00:06:00.455 SPDK target shutdown done 00:06:00.455 Success 00:06:00.455 10:29:25 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:00.455 10:29:25 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:00.455 10:29:25 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:00.455 10:29:25 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:00.455 10:29:25 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:00.455 ************************************ 00:06:00.455 END TEST json_config_extra_key 00:06:00.455 ************************************ 00:06:00.455 00:06:00.455 real 0m1.843s 00:06:00.455 user 0m1.802s 00:06:00.455 sys 0m0.457s 00:06:00.455 10:29:25 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.455 10:29:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:00.455 10:29:25 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:00.455 10:29:25 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:00.455 10:29:25 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:00.455 10:29:25 -- common/autotest_common.sh@10 -- # set +x 00:06:00.455 ************************************ 00:06:00.455 START TEST alias_rpc 00:06:00.455 ************************************ 00:06:00.455 10:29:25 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:00.455 * Looking for test storage... 
00:06:00.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:00.455 10:29:25 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:00.455 10:29:25 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:00.455 10:29:25 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:00.714 10:29:26 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.714 10:29:26 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:00.714 10:29:26 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.714 10:29:26 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:00.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.714 --rc genhtml_branch_coverage=1 00:06:00.714 --rc genhtml_function_coverage=1 00:06:00.714 --rc genhtml_legend=1 00:06:00.714 --rc geninfo_all_blocks=1 00:06:00.714 --rc geninfo_unexecuted_blocks=1 00:06:00.714 00:06:00.714 ' 00:06:00.714 10:29:26 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:00.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.714 --rc genhtml_branch_coverage=1 00:06:00.714 --rc genhtml_function_coverage=1 00:06:00.714 --rc genhtml_legend=1 00:06:00.714 --rc geninfo_all_blocks=1 00:06:00.714 --rc geninfo_unexecuted_blocks=1 00:06:00.714 00:06:00.714 ' 00:06:00.714 10:29:26 alias_rpc -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:00.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.714 --rc genhtml_branch_coverage=1 00:06:00.714 --rc genhtml_function_coverage=1 00:06:00.714 --rc genhtml_legend=1 00:06:00.714 --rc geninfo_all_blocks=1 00:06:00.714 --rc geninfo_unexecuted_blocks=1 00:06:00.714 00:06:00.714 ' 00:06:00.714 10:29:26 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:00.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.714 --rc genhtml_branch_coverage=1 00:06:00.714 --rc genhtml_function_coverage=1 00:06:00.714 --rc genhtml_legend=1 00:06:00.714 --rc geninfo_all_blocks=1 00:06:00.714 --rc geninfo_unexecuted_blocks=1 00:06:00.714 00:06:00.714 ' 00:06:00.715 10:29:26 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:00.715 10:29:26 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57781 00:06:00.715 10:29:26 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.715 10:29:26 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57781 00:06:00.715 10:29:26 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57781 ']' 00:06:00.715 10:29:26 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.715 10:29:26 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:00.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.715 10:29:26 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.715 10:29:26 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:00.715 10:29:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.715 [2024-11-15 10:29:26.084299] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:06:00.715 [2024-11-15 10:29:26.084650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57781 ] 00:06:00.973 [2024-11-15 10:29:26.234048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.974 [2024-11-15 10:29:26.298350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.974 [2024-11-15 10:29:26.368827] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.570 10:29:27 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:01.570 10:29:27 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:01.570 10:29:27 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:01.829 10:29:27 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57781 00:06:01.829 10:29:27 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57781 ']' 00:06:01.829 10:29:27 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57781 00:06:01.829 10:29:27 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:06:01.829 10:29:27 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:02.088 10:29:27 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57781 00:06:02.088 killing process with pid 57781 00:06:02.088 10:29:27 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:02.088 10:29:27 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:02.088 10:29:27 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57781' 00:06:02.088 10:29:27 alias_rpc -- common/autotest_common.sh@971 -- # kill 57781 00:06:02.088 10:29:27 alias_rpc -- common/autotest_common.sh@976 -- # wait 57781 00:06:02.347 ************************************ 00:06:02.347 END TEST alias_rpc 00:06:02.347 ************************************ 00:06:02.347 00:06:02.347 real 0m1.910s 00:06:02.347 user 0m2.147s 00:06:02.347 sys 0m0.446s 00:06:02.347 10:29:27 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:02.347 10:29:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.347 10:29:27 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:02.347 10:29:27 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:02.347 10:29:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:02.347 10:29:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:02.347 10:29:27 -- common/autotest_common.sh@10 -- # set +x 00:06:02.347 ************************************ 00:06:02.347 START TEST spdkcli_tcp 00:06:02.347 ************************************ 00:06:02.347 10:29:27 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:02.606 * Looking for test storage... 
00:06:02.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:02.606 10:29:27 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:02.606 10:29:27 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:02.606 10:29:27 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:02.606 10:29:27 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.606 10:29:27 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:02.606 10:29:27 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.606 10:29:27 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:02.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.606 --rc genhtml_branch_coverage=1 00:06:02.606 --rc genhtml_function_coverage=1 00:06:02.606 --rc genhtml_legend=1 00:06:02.606 --rc geninfo_all_blocks=1 00:06:02.606 --rc geninfo_unexecuted_blocks=1 00:06:02.606 00:06:02.606 ' 00:06:02.606 10:29:27 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:02.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.606 --rc genhtml_branch_coverage=1 00:06:02.606 --rc genhtml_function_coverage=1 00:06:02.606 --rc genhtml_legend=1 00:06:02.606 --rc geninfo_all_blocks=1 00:06:02.606 --rc geninfo_unexecuted_blocks=1 00:06:02.606 
00:06:02.606 ' 00:06:02.606 10:29:27 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:02.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.606 --rc genhtml_branch_coverage=1 00:06:02.606 --rc genhtml_function_coverage=1 00:06:02.606 --rc genhtml_legend=1 00:06:02.606 --rc geninfo_all_blocks=1 00:06:02.606 --rc geninfo_unexecuted_blocks=1 00:06:02.606 00:06:02.606 ' 00:06:02.606 10:29:27 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:02.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.606 --rc genhtml_branch_coverage=1 00:06:02.606 --rc genhtml_function_coverage=1 00:06:02.606 --rc genhtml_legend=1 00:06:02.606 --rc geninfo_all_blocks=1 00:06:02.606 --rc geninfo_unexecuted_blocks=1 00:06:02.606 00:06:02.606 ' 00:06:02.606 10:29:27 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:02.606 10:29:27 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:02.606 10:29:27 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:02.607 10:29:27 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:02.607 10:29:27 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:02.607 10:29:27 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:02.607 10:29:27 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:02.607 10:29:27 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:02.607 10:29:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.607 10:29:27 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57865 00:06:02.607 10:29:27 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57865 00:06:02.607 10:29:27 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:02.607 10:29:27 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 57865 ']' 00:06:02.607 10:29:27 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.607 10:29:27 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:02.607 10:29:27 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.607 10:29:27 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:02.607 10:29:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.607 [2024-11-15 10:29:28.018629] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:06:02.607 [2024-11-15 10:29:28.018717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57865 ] 00:06:02.865 [2024-11-15 10:29:28.163313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.865 [2024-11-15 10:29:28.222548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.865 [2024-11-15 10:29:28.222552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.865 [2024-11-15 10:29:28.293541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.125 10:29:28 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:03.125 10:29:28 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:06:03.125 10:29:28 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57869 00:06:03.125 10:29:28 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:03.125 10:29:28 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:03.385 [ 00:06:03.385 "bdev_malloc_delete", 00:06:03.385 "bdev_malloc_create", 00:06:03.385 "bdev_null_resize", 00:06:03.385 "bdev_null_delete", 00:06:03.385 "bdev_null_create", 00:06:03.385 "bdev_nvme_cuse_unregister", 00:06:03.385 "bdev_nvme_cuse_register", 00:06:03.385 "bdev_opal_new_user", 00:06:03.385 "bdev_opal_set_lock_state", 00:06:03.385 "bdev_opal_delete", 00:06:03.385 "bdev_opal_get_info", 00:06:03.385 "bdev_opal_create", 00:06:03.385 "bdev_nvme_opal_revert", 00:06:03.385 "bdev_nvme_opal_init", 00:06:03.385 "bdev_nvme_send_cmd", 00:06:03.385 "bdev_nvme_set_keys", 00:06:03.385 "bdev_nvme_get_path_iostat", 00:06:03.385 "bdev_nvme_get_mdns_discovery_info", 00:06:03.385 "bdev_nvme_stop_mdns_discovery", 00:06:03.385 "bdev_nvme_start_mdns_discovery", 00:06:03.385 "bdev_nvme_set_multipath_policy", 00:06:03.385 "bdev_nvme_set_preferred_path", 00:06:03.385 "bdev_nvme_get_io_paths", 00:06:03.385 "bdev_nvme_remove_error_injection", 00:06:03.385 "bdev_nvme_add_error_injection", 00:06:03.385 "bdev_nvme_get_discovery_info", 00:06:03.385 "bdev_nvme_stop_discovery", 00:06:03.385 "bdev_nvme_start_discovery", 00:06:03.385 "bdev_nvme_get_controller_health_info", 00:06:03.385 "bdev_nvme_disable_controller", 00:06:03.385 "bdev_nvme_enable_controller", 00:06:03.385 "bdev_nvme_reset_controller", 00:06:03.385 "bdev_nvme_get_transport_statistics", 00:06:03.385 "bdev_nvme_apply_firmware", 00:06:03.385 "bdev_nvme_detach_controller", 00:06:03.385 "bdev_nvme_get_controllers", 00:06:03.385 "bdev_nvme_attach_controller", 00:06:03.385 "bdev_nvme_set_hotplug", 00:06:03.385 "bdev_nvme_set_options", 00:06:03.385 "bdev_passthru_delete", 00:06:03.385 "bdev_passthru_create", 00:06:03.385 "bdev_lvol_set_parent_bdev", 00:06:03.385 "bdev_lvol_set_parent", 00:06:03.385 "bdev_lvol_check_shallow_copy", 00:06:03.385 "bdev_lvol_start_shallow_copy", 00:06:03.385 "bdev_lvol_grow_lvstore", 00:06:03.385 "bdev_lvol_get_lvols", 00:06:03.385 "bdev_lvol_get_lvstores", 00:06:03.385 "bdev_lvol_delete", 00:06:03.385 "bdev_lvol_set_read_only", 00:06:03.385 "bdev_lvol_resize", 00:06:03.385 "bdev_lvol_decouple_parent", 00:06:03.385 "bdev_lvol_inflate", 00:06:03.385 "bdev_lvol_rename", 00:06:03.385 "bdev_lvol_clone_bdev", 00:06:03.385 "bdev_lvol_clone", 00:06:03.385 "bdev_lvol_snapshot", 
00:06:03.385 "bdev_lvol_create", 00:06:03.385 "bdev_lvol_delete_lvstore", 00:06:03.385 "bdev_lvol_rename_lvstore", 00:06:03.385 "bdev_lvol_create_lvstore", 00:06:03.385 "bdev_raid_set_options", 00:06:03.385 "bdev_raid_remove_base_bdev", 00:06:03.385 "bdev_raid_add_base_bdev", 00:06:03.385 "bdev_raid_delete", 00:06:03.385 "bdev_raid_create", 00:06:03.385 "bdev_raid_get_bdevs", 00:06:03.385 "bdev_error_inject_error", 00:06:03.385 "bdev_error_delete", 00:06:03.385 "bdev_error_create", 00:06:03.385 "bdev_split_delete", 00:06:03.385 "bdev_split_create", 00:06:03.385 "bdev_delay_delete", 00:06:03.385 "bdev_delay_create", 00:06:03.385 "bdev_delay_update_latency", 00:06:03.385 "bdev_zone_block_delete", 00:06:03.385 "bdev_zone_block_create", 00:06:03.385 "blobfs_create", 00:06:03.385 "blobfs_detect", 00:06:03.385 "blobfs_set_cache_size", 00:06:03.385 "bdev_aio_delete", 00:06:03.385 "bdev_aio_rescan", 00:06:03.385 "bdev_aio_create", 00:06:03.385 "bdev_ftl_set_property", 00:06:03.385 "bdev_ftl_get_properties", 00:06:03.385 "bdev_ftl_get_stats", 00:06:03.385 "bdev_ftl_unmap", 00:06:03.385 "bdev_ftl_unload", 00:06:03.385 "bdev_ftl_delete", 00:06:03.385 "bdev_ftl_load", 00:06:03.385 "bdev_ftl_create", 00:06:03.385 "bdev_virtio_attach_controller", 00:06:03.385 "bdev_virtio_scsi_get_devices", 00:06:03.385 "bdev_virtio_detach_controller", 00:06:03.385 "bdev_virtio_blk_set_hotplug", 00:06:03.385 "bdev_iscsi_delete", 00:06:03.385 "bdev_iscsi_create", 00:06:03.385 "bdev_iscsi_set_options", 00:06:03.385 "bdev_uring_delete", 00:06:03.385 "bdev_uring_rescan", 00:06:03.385 "bdev_uring_create", 00:06:03.385 "accel_error_inject_error", 00:06:03.385 "ioat_scan_accel_module", 00:06:03.385 "dsa_scan_accel_module", 00:06:03.385 "iaa_scan_accel_module", 00:06:03.385 "keyring_file_remove_key", 00:06:03.385 "keyring_file_add_key", 00:06:03.385 "keyring_linux_set_options", 00:06:03.385 "fsdev_aio_delete", 00:06:03.385 "fsdev_aio_create", 00:06:03.385 "iscsi_get_histogram", 00:06:03.385 "iscsi_enable_histogram", 00:06:03.385 "iscsi_set_options", 00:06:03.385 "iscsi_get_auth_groups", 00:06:03.385 "iscsi_auth_group_remove_secret", 00:06:03.385 "iscsi_auth_group_add_secret", 00:06:03.385 "iscsi_delete_auth_group", 00:06:03.385 "iscsi_create_auth_group", 00:06:03.385 "iscsi_set_discovery_auth", 00:06:03.385 "iscsi_get_options", 00:06:03.385 "iscsi_target_node_request_logout", 00:06:03.385 "iscsi_target_node_set_redirect", 00:06:03.385 "iscsi_target_node_set_auth", 00:06:03.385 "iscsi_target_node_add_lun", 00:06:03.385 "iscsi_get_stats", 00:06:03.385 "iscsi_get_connections", 00:06:03.385 "iscsi_portal_group_set_auth", 00:06:03.385 "iscsi_start_portal_group", 00:06:03.385 "iscsi_delete_portal_group", 00:06:03.385 "iscsi_create_portal_group", 00:06:03.385 "iscsi_get_portal_groups", 00:06:03.385 "iscsi_delete_target_node", 00:06:03.385 "iscsi_target_node_remove_pg_ig_maps", 00:06:03.385 "iscsi_target_node_add_pg_ig_maps", 00:06:03.385 "iscsi_create_target_node", 00:06:03.385 "iscsi_get_target_nodes", 00:06:03.385 "iscsi_delete_initiator_group", 00:06:03.385 "iscsi_initiator_group_remove_initiators", 00:06:03.385 "iscsi_initiator_group_add_initiators", 00:06:03.385 "iscsi_create_initiator_group", 00:06:03.385 "iscsi_get_initiator_groups", 00:06:03.385 "nvmf_set_crdt", 00:06:03.385 "nvmf_set_config", 00:06:03.385 "nvmf_set_max_subsystems", 00:06:03.385 "nvmf_stop_mdns_prr", 00:06:03.385 "nvmf_publish_mdns_prr", 00:06:03.385 "nvmf_subsystem_get_listeners", 00:06:03.385 "nvmf_subsystem_get_qpairs", 00:06:03.385 
"nvmf_subsystem_get_controllers", 00:06:03.385 "nvmf_get_stats", 00:06:03.385 "nvmf_get_transports", 00:06:03.385 "nvmf_create_transport", 00:06:03.385 "nvmf_get_targets", 00:06:03.385 "nvmf_delete_target", 00:06:03.385 "nvmf_create_target", 00:06:03.385 "nvmf_subsystem_allow_any_host", 00:06:03.385 "nvmf_subsystem_set_keys", 00:06:03.385 "nvmf_subsystem_remove_host", 00:06:03.385 "nvmf_subsystem_add_host", 00:06:03.385 "nvmf_ns_remove_host", 00:06:03.385 "nvmf_ns_add_host", 00:06:03.385 "nvmf_subsystem_remove_ns", 00:06:03.385 "nvmf_subsystem_set_ns_ana_group", 00:06:03.385 "nvmf_subsystem_add_ns", 00:06:03.385 "nvmf_subsystem_listener_set_ana_state", 00:06:03.385 "nvmf_discovery_get_referrals", 00:06:03.385 "nvmf_discovery_remove_referral", 00:06:03.385 "nvmf_discovery_add_referral", 00:06:03.385 "nvmf_subsystem_remove_listener", 00:06:03.385 "nvmf_subsystem_add_listener", 00:06:03.385 "nvmf_delete_subsystem", 00:06:03.385 "nvmf_create_subsystem", 00:06:03.385 "nvmf_get_subsystems", 00:06:03.385 "env_dpdk_get_mem_stats", 00:06:03.385 "nbd_get_disks", 00:06:03.385 "nbd_stop_disk", 00:06:03.385 "nbd_start_disk", 00:06:03.385 "ublk_recover_disk", 00:06:03.385 "ublk_get_disks", 00:06:03.385 "ublk_stop_disk", 00:06:03.385 "ublk_start_disk", 00:06:03.385 "ublk_destroy_target", 00:06:03.385 "ublk_create_target", 00:06:03.385 "virtio_blk_create_transport", 00:06:03.385 "virtio_blk_get_transports", 00:06:03.385 "vhost_controller_set_coalescing", 00:06:03.385 "vhost_get_controllers", 00:06:03.385 "vhost_delete_controller", 00:06:03.385 "vhost_create_blk_controller", 00:06:03.385 "vhost_scsi_controller_remove_target", 00:06:03.385 "vhost_scsi_controller_add_target", 00:06:03.385 "vhost_start_scsi_controller", 00:06:03.385 "vhost_create_scsi_controller", 00:06:03.385 "thread_set_cpumask", 00:06:03.385 "scheduler_set_options", 00:06:03.385 "framework_get_governor", 00:06:03.385 "framework_get_scheduler", 00:06:03.385 "framework_set_scheduler", 00:06:03.385 "framework_get_reactors", 00:06:03.385 "thread_get_io_channels", 00:06:03.385 "thread_get_pollers", 00:06:03.385 "thread_get_stats", 00:06:03.385 "framework_monitor_context_switch", 00:06:03.385 "spdk_kill_instance", 00:06:03.385 "log_enable_timestamps", 00:06:03.385 "log_get_flags", 00:06:03.385 "log_clear_flag", 00:06:03.385 "log_set_flag", 00:06:03.385 "log_get_level", 00:06:03.385 "log_set_level", 00:06:03.385 "log_get_print_level", 00:06:03.385 "log_set_print_level", 00:06:03.385 "framework_enable_cpumask_locks", 00:06:03.385 "framework_disable_cpumask_locks", 00:06:03.385 "framework_wait_init", 00:06:03.385 "framework_start_init", 00:06:03.385 "scsi_get_devices", 00:06:03.385 "bdev_get_histogram", 00:06:03.385 "bdev_enable_histogram", 00:06:03.385 "bdev_set_qos_limit", 00:06:03.385 "bdev_set_qd_sampling_period", 00:06:03.385 "bdev_get_bdevs", 00:06:03.385 "bdev_reset_iostat", 00:06:03.385 "bdev_get_iostat", 00:06:03.386 "bdev_examine", 00:06:03.386 "bdev_wait_for_examine", 00:06:03.386 "bdev_set_options", 00:06:03.386 "accel_get_stats", 00:06:03.386 "accel_set_options", 00:06:03.386 "accel_set_driver", 00:06:03.386 "accel_crypto_key_destroy", 00:06:03.386 "accel_crypto_keys_get", 00:06:03.386 "accel_crypto_key_create", 00:06:03.386 "accel_assign_opc", 00:06:03.386 "accel_get_module_info", 00:06:03.386 "accel_get_opc_assignments", 00:06:03.386 "vmd_rescan", 00:06:03.386 "vmd_remove_device", 00:06:03.386 "vmd_enable", 00:06:03.386 "sock_get_default_impl", 00:06:03.386 "sock_set_default_impl", 00:06:03.386 "sock_impl_set_options", 00:06:03.386 
"sock_impl_get_options", 00:06:03.386 "iobuf_get_stats", 00:06:03.386 "iobuf_set_options", 00:06:03.386 "keyring_get_keys", 00:06:03.386 "framework_get_pci_devices", 00:06:03.386 "framework_get_config", 00:06:03.386 "framework_get_subsystems", 00:06:03.386 "fsdev_set_opts", 00:06:03.386 "fsdev_get_opts", 00:06:03.386 "trace_get_info", 00:06:03.386 "trace_get_tpoint_group_mask", 00:06:03.386 "trace_disable_tpoint_group", 00:06:03.386 "trace_enable_tpoint_group", 00:06:03.386 "trace_clear_tpoint_mask", 00:06:03.386 "trace_set_tpoint_mask", 00:06:03.386 "notify_get_notifications", 00:06:03.386 "notify_get_types", 00:06:03.386 "spdk_get_version", 00:06:03.386 "rpc_get_methods" 00:06:03.386 ] 00:06:03.386 10:29:28 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:03.386 10:29:28 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:03.386 10:29:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.386 10:29:28 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:03.386 10:29:28 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57865 00:06:03.386 10:29:28 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57865 ']' 00:06:03.386 10:29:28 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57865 00:06:03.386 10:29:28 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:06:03.386 10:29:28 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:03.386 10:29:28 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57865 00:06:03.645 killing process with pid 57865 00:06:03.645 10:29:28 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:03.645 10:29:28 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:03.645 10:29:28 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57865' 00:06:03.645 10:29:28 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57865 00:06:03.645 10:29:28 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57865 00:06:03.904 ************************************ 00:06:03.904 END TEST spdkcli_tcp 00:06:03.904 ************************************ 00:06:03.904 00:06:03.904 real 0m1.485s 00:06:03.904 user 0m2.610s 00:06:03.904 sys 0m0.455s 00:06:03.904 10:29:29 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:03.904 10:29:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.904 10:29:29 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:03.904 10:29:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:03.904 10:29:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:03.904 10:29:29 -- common/autotest_common.sh@10 -- # set +x 00:06:03.904 ************************************ 00:06:03.904 START TEST dpdk_mem_utility 00:06:03.904 ************************************ 00:06:03.904 10:29:29 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:03.904 * Looking for test storage... 
00:06:04.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:04.163 10:29:29 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:04.163 10:29:29 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:06:04.163 10:29:29 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:04.163 10:29:29 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:04.163 10:29:29 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.163 10:29:29 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.163 10:29:29 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.163 10:29:29 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.163 10:29:29 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.163 10:29:29 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.163 10:29:29 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.163 10:29:29 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.163 10:29:29 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.163 10:29:29 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.163 10:29:29 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.163 10:29:29 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:04.163 10:29:29 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:04.163 10:29:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.163 10:29:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.163 10:29:29 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:04.163 10:29:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:04.163 10:29:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.164 10:29:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:04.164 10:29:29 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.164 10:29:29 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:04.164 10:29:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:04.164 10:29:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.164 10:29:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:04.164 10:29:29 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.164 10:29:29 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.164 10:29:29 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.164 10:29:29 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:04.164 10:29:29 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.164 10:29:29 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:04.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.164 --rc genhtml_branch_coverage=1 00:06:04.164 --rc genhtml_function_coverage=1 00:06:04.164 --rc genhtml_legend=1 00:06:04.164 --rc geninfo_all_blocks=1 00:06:04.164 --rc geninfo_unexecuted_blocks=1 00:06:04.164 00:06:04.164 ' 00:06:04.164 10:29:29 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:04.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.164 --rc 
genhtml_branch_coverage=1
00:06:04.164 --rc genhtml_function_coverage=1
00:06:04.164 --rc genhtml_legend=1
00:06:04.164 --rc geninfo_all_blocks=1
00:06:04.164 --rc geninfo_unexecuted_blocks=1
00:06:04.164
00:06:04.164 '
00:06:04.164 10:29:29 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:06:04.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:04.164 --rc genhtml_branch_coverage=1
00:06:04.164 --rc genhtml_function_coverage=1
00:06:04.164 --rc genhtml_legend=1
00:06:04.164 --rc geninfo_all_blocks=1
00:06:04.164 --rc geninfo_unexecuted_blocks=1
00:06:04.164
00:06:04.164 '
00:06:04.164 10:29:29 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:06:04.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:04.164 --rc genhtml_branch_coverage=1
00:06:04.164 --rc genhtml_function_coverage=1
00:06:04.164 --rc genhtml_legend=1
00:06:04.164 --rc geninfo_all_blocks=1
00:06:04.164 --rc geninfo_unexecuted_blocks=1
00:06:04.164
00:06:04.164 '
00:06:04.164 10:29:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:06:04.164 10:29:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57951
00:06:04.164 10:29:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:04.164 10:29:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57951
00:06:04.164 10:29:29 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 57951 ']'
00:06:04.164 10:29:29 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:04.164 10:29:29 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:04.164 10:29:29 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:04.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:04.164 10:29:29 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:04.164 10:29:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:04.164 [2024-11-15 10:29:29.558965] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
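[Editor's note: between the waitforlisten trace above and the EAL banner below, the helper is polling until the target's RPC socket answers. A simplified sketch of that polling loop, under the assumption that rpc_get_methods is the liveness probe (the real helper in autotest_common.sh also tracks retries and xtrace state):]

```bash
# Sketch of waitforlisten: block until spdk_tgt answers on its RPC socket.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" || return 1          # target died while starting up
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
               rpc_get_methods &> /dev/null; then
            return 0                        # socket is up and answering RPCs
        fi
        sleep 0.5
    done
    return 1
}
```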
00:06:04.164 [2024-11-15 10:29:29.559879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57951 ]
00:06:04.423 [2024-11-15 10:29:29.709555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:04.423 [2024-11-15 10:29:29.772734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.423 [2024-11-15 10:29:29.845341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:06:04.682 10:29:30 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:04.682 10:29:30 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0
00:06:04.682 10:29:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:06:04.682 10:29:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:06:04.682 10:29:30 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:04.682 10:29:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:04.682 {
00:06:04.682 "filename": "/tmp/spdk_mem_dump.txt"
00:06:04.682 }
00:06:04.682 10:29:30 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:04.682 10:29:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:06:04.682 DPDK memory size 818.000000 MiB in 1 heap(s)
00:06:04.682 1 heaps totaling size 818.000000 MiB
00:06:04.682 size: 818.000000 MiB heap id: 0
00:06:04.682 end heaps----------
00:06:04.682 9 mempools totaling size 603.782043 MiB
00:06:04.682 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:06:04.682 size: 158.602051 MiB name: PDU_data_out_Pool
00:06:04.682 size: 100.555481 MiB name: bdev_io_57951
00:06:04.682 size: 50.003479 MiB name: msgpool_57951
00:06:04.682 size: 36.509338 MiB name: fsdev_io_57951
00:06:04.682 size: 21.763794 MiB name: PDU_Pool
00:06:04.682 size: 19.513306 MiB name: SCSI_TASK_Pool
00:06:04.682 size: 4.133484 MiB name: evtpool_57951
00:06:04.682 size: 0.026123 MiB name: Session_Pool
00:06:04.682 end mempools-------
00:06:04.682 6 memzones totaling size 4.142822 MiB
00:06:04.682 size: 1.000366 MiB name: RG_ring_0_57951
00:06:04.682 size: 1.000366 MiB name: RG_ring_1_57951
00:06:04.682 size: 1.000366 MiB name: RG_ring_4_57951
00:06:04.682 size: 1.000366 MiB name: RG_ring_5_57951
00:06:04.682 size: 0.125366 MiB name: RG_ring_2_57951
00:06:04.682 size: 0.015991 MiB name: RG_ring_3_57951
00:06:04.682 end memzones-------
00:06:04.682 10:29:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:06:04.943 heap id: 0 total size: 818.000000 MiB number of busy elements: 311 number of free elements: 15
00:06:04.943 list of free elements.
size: 10.803589 MiB 00:06:04.943 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:04.943 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:04.943 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:04.943 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:04.943 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:04.943 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:04.943 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:04.943 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:04.943 element at address: 0x20001ae00000 with size: 0.568604 MiB 00:06:04.943 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:04.943 element at address: 0x200000c00000 with size: 0.486267 MiB 00:06:04.943 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:04.943 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:04.943 element at address: 0x200028200000 with size: 0.395935 MiB 00:06:04.943 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:04.943 list of standard malloc elements. size: 199.267517 MiB 00:06:04.943 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:04.943 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:04.943 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:04.943 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:04.943 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:04.943 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:04.943 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:04.943 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:04.943 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:04.943 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:06:04.943 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:04.943 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:04.943 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:04.943 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:04.944 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:04.944 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:04.944 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:04.944 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:04.944 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:04.944 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:04.944 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae93040 with size: 0.000183 MiB 
00:06:04.944 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:06:04.944 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:04.945 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:06:04.945 element at 
address: 0x200028265680 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826c280 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826c480 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826c540 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826c600 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826c780 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826c840 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826c900 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826d080 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826d140 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826d200 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826d380 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826d440 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826d500 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826d680 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826d740 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826d800 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826d980 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826da40 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826db00 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826de00 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826df80 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826e040 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826e100 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826e280 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826e340 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826e400 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826e580 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826e640 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826e700 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826e7c0 
with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826e880 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826e940 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826f000 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826f180 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826f240 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826f300 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826f480 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826f540 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826f600 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826f780 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826f840 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826f900 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:04.945 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:04.945 list of memzone associated elements. 
size: 607.928894 MiB 00:06:04.945 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:04.945 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:04.945 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:04.945 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:04.945 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:04.945 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57951_0 00:06:04.945 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:04.945 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57951_0 00:06:04.945 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:04.945 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57951_0 00:06:04.945 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:04.945 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:04.945 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:04.945 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:04.945 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:04.945 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57951_0 00:06:04.945 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:04.945 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57951 00:06:04.945 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:04.945 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57951 00:06:04.945 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:04.945 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:04.945 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:04.945 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:04.945 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:04.945 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:04.945 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:04.945 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:04.945 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:04.945 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57951 00:06:04.945 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:04.945 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57951 00:06:04.945 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:04.945 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57951 00:06:04.946 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:06:04.946 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57951 00:06:04.946 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:04.946 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57951 00:06:04.946 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:04.946 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57951 00:06:04.946 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:04.946 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:04.946 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:04.946 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:04.946 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:04.946 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool
00:06:04.946 element at address: 0x2000002b7a40 with size: 0.125488 MiB
00:06:04.946 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57951
00:06:04.946 element at address: 0x20000085e640 with size: 0.125488 MiB
00:06:04.946 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57951
00:06:04.946 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:06:04.946 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:04.946 element at address: 0x200028265740 with size: 0.023743 MiB
00:06:04.946 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:04.946 element at address: 0x20000085a380 with size: 0.016113 MiB
00:06:04.946 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57951
00:06:04.946 element at address: 0x20002826b880 with size: 0.002441 MiB
00:06:04.946 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:04.946 element at address: 0x2000004ffb80 with size: 0.000305 MiB
00:06:04.946 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57951
00:06:04.946 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:06:04.946 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57951
00:06:04.946 element at address: 0x20000085a180 with size: 0.000305 MiB
00:06:04.946 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57951
00:06:04.946 element at address: 0x20002826c340 with size: 0.000305 MiB
00:06:04.946 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:04.946 10:29:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:04.946 10:29:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57951
00:06:04.946 10:29:30 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 57951 ']'
00:06:04.946 10:29:30 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 57951
00:06:04.946 10:29:30 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname
00:06:04.946 10:29:30 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:04.946 10:29:30 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57951
00:06:04.946 killing process with pid 57951
10:29:30 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0
10:29:30 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
10:29:30 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57951'
10:29:30 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 57951
10:29:30 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 57951
00:06:05.205
00:06:05.205 real 0m1.327s
00:06:05.205 user 0m1.266s
00:06:05.205 sys 0m0.435s
00:06:05.205 10:29:30 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:05.205 ************************************
00:06:05.205 END TEST dpdk_mem_utility
00:06:05.205 ************************************
00:06:05.205 10:29:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:05.205 10:29:30 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:05.205 10:29:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:05.205 10:29:30 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:05.205 10:29:30 -- common/autotest_common.sh@10 -- # set +x
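[Editor's note: the dpdk_mem_utility test that just ended exercises exactly two interfaces, both visible in the trace: the env_dpdk_get_mem_stats RPC, which makes the running target dump its DPDK allocator state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which post-processes that dump (plain for the heap/mempool/memzone summary, -m 0 for per-element detail on heap 0). Reproduced by hand against a local target, the sequence looks roughly like this; paths assume a stock SPDK checkout and the sleep is a crude stand-in for waitforlisten:]

```bash
# Manual equivalent of the dpdk_mem_utility test steps traced above.
./build/bin/spdk_tgt &            # target listens on /var/tmp/spdk.sock
tgt_pid=$!
sleep 2                           # crude stand-in for waitforlisten

./scripts/rpc.py env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
./scripts/dpdk_mem_info.py                # summary: heaps, mempools, memzones
./scripts/dpdk_mem_info.py -m 0           # per-element detail for heap 0

kill "$tgt_pid"; wait "$tgt_pid"
```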
00:06:05.464 ************************************ 00:06:05.464 START TEST event 00:06:05.464 ************************************ 00:06:05.464 10:29:30 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:05.464 * Looking for test storage... 00:06:05.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:05.464 10:29:30 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:05.464 10:29:30 event -- common/autotest_common.sh@1691 -- # lcov --version 00:06:05.464 10:29:30 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:05.464 10:29:30 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:05.464 10:29:30 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.464 10:29:30 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.464 10:29:30 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.464 10:29:30 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.464 10:29:30 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.464 10:29:30 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.464 10:29:30 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.464 10:29:30 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.464 10:29:30 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.464 10:29:30 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.464 10:29:30 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.464 10:29:30 event -- scripts/common.sh@344 -- # case "$op" in 00:06:05.464 10:29:30 event -- scripts/common.sh@345 -- # : 1 00:06:05.464 10:29:30 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.464 10:29:30 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.464 10:29:30 event -- scripts/common.sh@365 -- # decimal 1 00:06:05.464 10:29:30 event -- scripts/common.sh@353 -- # local d=1 00:06:05.464 10:29:30 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.464 10:29:30 event -- scripts/common.sh@355 -- # echo 1 00:06:05.464 10:29:30 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.464 10:29:30 event -- scripts/common.sh@366 -- # decimal 2 00:06:05.464 10:29:30 event -- scripts/common.sh@353 -- # local d=2 00:06:05.464 10:29:30 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.464 10:29:30 event -- scripts/common.sh@355 -- # echo 2 00:06:05.464 10:29:30 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.464 10:29:30 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.464 10:29:30 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.465 10:29:30 event -- scripts/common.sh@368 -- # return 0 00:06:05.465 10:29:30 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.465 10:29:30 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:05.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.465 --rc genhtml_branch_coverage=1 00:06:05.465 --rc genhtml_function_coverage=1 00:06:05.465 --rc genhtml_legend=1 00:06:05.465 --rc geninfo_all_blocks=1 00:06:05.465 --rc geninfo_unexecuted_blocks=1 00:06:05.465 00:06:05.465 ' 00:06:05.465 10:29:30 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:05.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.465 --rc genhtml_branch_coverage=1 00:06:05.465 --rc genhtml_function_coverage=1 00:06:05.465 --rc genhtml_legend=1 00:06:05.465 --rc 
geninfo_all_blocks=1
00:06:05.465 --rc geninfo_unexecuted_blocks=1
00:06:05.465
00:06:05.465 '
00:06:05.465 10:29:30 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:06:05.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:05.465 --rc genhtml_branch_coverage=1
00:06:05.465 --rc genhtml_function_coverage=1
00:06:05.465 --rc genhtml_legend=1
00:06:05.465 --rc geninfo_all_blocks=1
00:06:05.465 --rc geninfo_unexecuted_blocks=1
00:06:05.465
00:06:05.465 '
00:06:05.465 10:29:30 event -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:06:05.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:05.465 --rc genhtml_branch_coverage=1
00:06:05.465 --rc genhtml_function_coverage=1
00:06:05.465 --rc genhtml_legend=1
00:06:05.465 --rc geninfo_all_blocks=1
00:06:05.465 --rc geninfo_unexecuted_blocks=1
00:06:05.465
00:06:05.465 '
00:06:05.465 10:29:30 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:06:05.465 10:29:30 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:05.465 10:29:30 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:05.465 10:29:30 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']'
00:06:05.465 10:29:30 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:05.465 10:29:30 event -- common/autotest_common.sh@10 -- # set +x
00:06:05.465 ************************************
00:06:05.465 START TEST event_perf
00:06:05.465 ************************************
00:06:05.465 10:29:30 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:05.723 Running I/O for 1 seconds...[2024-11-15 10:29:30.915966] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:06:05.723 [2024-11-15 10:29:30.916245] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58034 ]
00:06:05.723 [2024-11-15 10:29:31.072636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:05.723 [2024-11-15 10:29:31.145701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:05.723 [2024-11-15 10:29:31.145953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:05.723 [2024-11-15 10:29:31.145962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:05.723 Running I/O for 1 seconds...[2024-11-15 10:29:31.145831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:07.098
00:06:07.098 lcore 0: 180823
00:06:07.098 lcore 1: 180822
00:06:07.098 lcore 2: 180821
00:06:07.098 lcore 3: 180821
00:06:07.098 done.
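[Editor's note: the four per-lcore counters just above (roughly 180k events per lcore in the one-second run) can be totaled straight from a captured run. Purely illustrative, assuming the output was saved to a file without the Jenkins timestamp prefix:]

```bash
# Illustrative only: total the per-lcore counts from a saved event_perf run.
./test/event/event_perf/event_perf -m 0xF -t 1 | tee event_perf.log
awk '/^lcore [0-9]+:/ { total += $3; n++ }
     END { printf "%d events across %d lcores\n", total, n }' event_perf.log
```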
00:06:07.098
00:06:07.098 real 0m1.305s
00:06:07.098 user 0m4.118s
00:06:07.098 sys 0m0.061s
00:06:07.098 10:29:32 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:07.098 10:29:32 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:07.098 ************************************
00:06:07.098 END TEST event_perf
00:06:07.098 ************************************
00:06:07.098 10:29:32 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:06:07.098 10:29:32 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:06:07.098 10:29:32 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:07.098 10:29:32 event -- common/autotest_common.sh@10 -- # set +x
00:06:07.098 ************************************
00:06:07.098 START TEST event_reactor
00:06:07.098 ************************************
00:06:07.098 10:29:32 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:06:07.098 [2024-11-15 10:29:32.269847] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:06:07.098 [2024-11-15 10:29:32.269942] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58067 ]
00:06:07.098 [2024-11-15 10:29:32.421953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:07.098 [2024-11-15 10:29:32.490003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:08.117 test_start
00:06:08.117 oneshot
00:06:08.117 tick 100
00:06:08.117 tick 100
00:06:08.117 tick 250
00:06:08.117 tick 100
00:06:08.117 tick 100
00:06:08.117 tick 100
00:06:08.117 tick 250
00:06:08.117 tick 500
00:06:08.117 tick 100
00:06:08.117 tick 100
00:06:08.117 tick 250
00:06:08.117 tick 100
00:06:08.117 tick 100
00:06:08.117 test_end
00:06:08.117 ************************************
00:06:08.117 END TEST event_reactor
00:06:08.117 ************************************
00:06:08.117
00:06:08.117 real 0m1.290s
00:06:08.117 user 0m1.135s
00:06:08.117 sys 0m0.047s
00:06:08.117 10:29:33 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:08.117 10:29:33 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:06:08.117 10:29:33 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:08.117 10:29:33 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:06:08.117 10:29:33 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:08.117 10:29:33 event -- common/autotest_common.sh@10 -- # set +x
00:06:08.117 ************************************
00:06:08.117 START TEST event_reactor_perf
00:06:08.117 ************************************
00:06:08.117 10:29:33 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:08.382 [2024-11-15 10:29:33.615818] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:06:08.382 [2024-11-15 10:29:33.615926] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58097 ]
00:06:08.382 [2024-11-15 10:29:33.759297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:08.382 [2024-11-15 10:29:33.816983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.762 test_start
00:06:09.762 test_end
00:06:09.762 Performance: 392041 events per second
00:06:09.762
00:06:09.762 real 0m1.263s
00:06:09.762 user 0m1.110s
00:06:09.762 sys 0m0.047s
00:06:09.762 10:29:34 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:09.762 10:29:34 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:06:09.762 ************************************
00:06:09.762 END TEST event_reactor_perf
00:06:09.762 ************************************
00:06:09.762 10:29:34 event -- event/event.sh@49 -- # uname -s
00:06:09.762 10:29:34 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:09.762 10:29:34 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:06:09.762 10:29:34 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:09.762 10:29:34 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:09.762 10:29:34 event -- common/autotest_common.sh@10 -- # set +x
00:06:09.762 ************************************
00:06:09.762 START TEST event_scheduler
00:06:09.762 ************************************
00:06:09.762 10:29:34 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:06:09.762 * Looking for test storage...
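[Editor's note: the uname -s probe a few lines up is event.sh gating the scheduler suite to Linux hosts, presumably because the dynamic scheduler's governor touches Linux-only sysfs paths, as the POWER messages further down show. The idiom, sketched with an assumed $rootdir variable standing in for the SPDK repo root:]

```bash
# The OS gate traced above, as an idiom ($rootdir is assumed here).
if [ "$(uname -s)" = Linux ]; then
    run_test "event_scheduler" "$rootdir/test/event/scheduler/scheduler.sh"
fi
```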
00:06:09.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:09.762 10:29:35 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:09.762 10:29:35 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:06:09.762 10:29:35 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:09.762 10:29:35 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.762 10:29:35 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:09.762 10:29:35 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.762 10:29:35 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:09.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.762 --rc genhtml_branch_coverage=1 00:06:09.762 --rc genhtml_function_coverage=1 00:06:09.762 --rc genhtml_legend=1 00:06:09.762 --rc geninfo_all_blocks=1 00:06:09.762 --rc geninfo_unexecuted_blocks=1 00:06:09.762 00:06:09.762 ' 00:06:09.762 10:29:35 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:09.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.762 --rc genhtml_branch_coverage=1 00:06:09.762 --rc genhtml_function_coverage=1 00:06:09.762 --rc genhtml_legend=1 00:06:09.762 --rc geninfo_all_blocks=1 00:06:09.762 --rc geninfo_unexecuted_blocks=1 00:06:09.762 00:06:09.762 ' 00:06:09.762 10:29:35 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:09.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.762 --rc genhtml_branch_coverage=1 00:06:09.762 --rc genhtml_function_coverage=1 00:06:09.762 --rc genhtml_legend=1 00:06:09.762 --rc geninfo_all_blocks=1 00:06:09.762 --rc geninfo_unexecuted_blocks=1 00:06:09.762 00:06:09.762 ' 00:06:09.762 10:29:35 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:09.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.762 --rc genhtml_branch_coverage=1 00:06:09.762 --rc genhtml_function_coverage=1 00:06:09.762 --rc genhtml_legend=1 00:06:09.762 --rc geninfo_all_blocks=1 00:06:09.762 --rc geninfo_unexecuted_blocks=1 00:06:09.762 00:06:09.762 ' 00:06:09.762 10:29:35 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:09.762 10:29:35 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58172 00:06:09.762 10:29:35 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.762 10:29:35 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:09.762 10:29:35 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58172 00:06:09.762 10:29:35 
event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58172 ']'
00:06:09.762 10:29:35 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:09.762 10:29:35 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:09.762 10:29:35 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:09.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:09.762 10:29:35 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:09.762 10:29:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:09.762 [2024-11-15 10:29:35.172691] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:06:09.762 [2024-11-15 10:29:35.173010] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58172 ]
00:06:10.020 [2024-11-15 10:29:35.326337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:10.020 [2024-11-15 10:29:35.392552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:10.020 [2024-11-15 10:29:35.392651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:10.020 [2024-11-15 10:29:35.392814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:10.020 [2024-11-15 10:29:35.392821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:10.956 10:29:36 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:10.956 10:29:36 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0
00:06:10.956 10:29:36 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:10.956 10:29:36 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.956 10:29:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:10.956 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:10.956 POWER: Cannot set governor of lcore 0 to userspace
00:06:10.956 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:10.956 POWER: Cannot set governor of lcore 0 to performance
00:06:10.956 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:10.956 POWER: Cannot set governor of lcore 0 to userspace
00:06:10.956 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:10.956 POWER: Cannot set governor of lcore 0 to userspace
00:06:10.956 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:06:10.956 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:06:10.956 POWER: Unable to set Power Management Environment for lcore 0
00:06:10.956 [2024-11-15 10:29:36.195200] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0
00:06:10.956 [2024-11-15 10:29:36.195213] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0
00:06:10.956 [2024-11-15 10:29:36.195227] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:06:10.956 [2024-11-15 10:29:36.195240] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:06:10.956 [2024-11-15 10:29:36.195248] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:06:10.956 [2024-11-15 10:29:36.195255] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:10.956 10:29:36 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.956 10:29:36 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:10.956 10:29:36 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.956 10:29:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:10.956 [2024-11-15 10:29:36.256672] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:06:10.956 [2024-11-15 10:29:36.293641] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:06:10.956 10:29:36 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.956 10:29:36 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:10.956 10:29:36 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:10.956 10:29:36 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:10.956 10:29:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:10.956 ************************************
00:06:10.956 START TEST scheduler_create_thread
00:06:10.956 ************************************
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.956 2
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.956 3
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.956 4
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.956 5
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.956 6
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.956 7
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.956 8
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.956 9
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.956 10
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:10.956 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.957 10:29:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:12.871 10:29:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:12.871 10:29:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:12.871 10:29:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:12.871 10:29:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:12.871 10:29:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:13.438 ************************************
00:06:13.438 END TEST scheduler_create_thread
00:06:13.438 ************************************
00:06:13.438 10:29:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:13.438
00:06:13.438 real 0m2.615s
00:06:13.438 user 0m0.018s
00:06:13.438 sys 0m0.008s
00:06:13.438 10:29:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:13.438 10:29:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:13.697 10:29:38 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:13.697 10:29:38 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58172
00:06:13.697 10:29:38 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58172 ']'
00:06:13.697 10:29:38 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58172
00:06:13.697 10:29:38 event.event_scheduler -- common/autotest_common.sh@957 -- # uname
00:06:13.697 10:29:38 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:13.697 10:29:38 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58172
00:06:13.697 killing process with pid 58172
10:29:39 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2
10:29:39 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
10:29:39 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid
58172' 00:06:13.697 10:29:39 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58172 00:06:13.697 10:29:39 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58172 00:06:13.956 [2024-11-15 10:29:39.401995] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:14.215 00:06:14.215 real 0m4.693s 00:06:14.215 user 0m8.959s 00:06:14.215 sys 0m0.404s 00:06:14.215 10:29:39 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:14.215 10:29:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.215 ************************************ 00:06:14.215 END TEST event_scheduler 00:06:14.215 ************************************ 00:06:14.215 10:29:39 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:14.215 10:29:39 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:14.215 10:29:39 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:14.215 10:29:39 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:14.215 10:29:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.215 ************************************ 00:06:14.215 START TEST app_repeat 00:06:14.215 ************************************ 00:06:14.215 10:29:39 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:06:14.215 10:29:39 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.215 10:29:39 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.215 10:29:39 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:14.215 10:29:39 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.215 10:29:39 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:14.215 10:29:39 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:14.215 10:29:39 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:14.215 Process app_repeat pid: 58266 00:06:14.215 10:29:39 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58266 00:06:14.215 10:29:39 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.215 10:29:39 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:14.215 10:29:39 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58266' 00:06:14.215 10:29:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:14.215 spdk_app_start Round 0 00:06:14.215 10:29:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:14.215 10:29:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58266 /var/tmp/spdk-nbd.sock 00:06:14.215 10:29:39 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58266 ']' 00:06:14.215 10:29:39 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.215 10:29:39 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:14.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.215 10:29:39 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:14.215 10:29:39 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:14.215 10:29:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.215 [2024-11-15 10:29:39.707466] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:06:14.215 [2024-11-15 10:29:39.707647] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58266 ] 00:06:14.474 [2024-11-15 10:29:39.854857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.474 [2024-11-15 10:29:39.916180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.474 [2024-11-15 10:29:39.916191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.732 [2024-11-15 10:29:39.972633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.732 10:29:40 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:14.732 10:29:40 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:14.732 10:29:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.990 Malloc0 00:06:14.990 10:29:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.249 Malloc1 00:06:15.249 10:29:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.249 10:29:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.249 10:29:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.249 10:29:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.249 10:29:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.249 10:29:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.249 10:29:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.249 10:29:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.249 10:29:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.249 10:29:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.249 10:29:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.249 10:29:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.249 10:29:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:15.249 10:29:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.249 10:29:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.249 10:29:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:15.815 /dev/nbd0 00:06:15.815 10:29:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:15.815 10:29:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:15.815 10:29:41 event.app_repeat -- common/autotest_common.sh@870 -- # local 
nbd_name=nbd0 00:06:15.815 10:29:41 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:15.815 10:29:41 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:15.815 10:29:41 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:15.815 10:29:41 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:15.815 10:29:41 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:15.815 10:29:41 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:15.815 10:29:41 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:15.815 10:29:41 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.815 1+0 records in 00:06:15.815 1+0 records out 00:06:15.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024113 s, 17.0 MB/s 00:06:15.815 10:29:41 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.815 10:29:41 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:15.815 10:29:41 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.815 10:29:41 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:15.815 10:29:41 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:15.815 10:29:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.815 10:29:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.815 10:29:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:16.074 /dev/nbd1 00:06:16.074 10:29:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.074 10:29:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.074 10:29:41 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:16.074 10:29:41 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:16.074 10:29:41 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:16.074 10:29:41 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:16.074 10:29:41 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:16.074 10:29:41 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:16.074 10:29:41 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:16.074 10:29:41 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:16.074 10:29:41 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.074 1+0 records in 00:06:16.074 1+0 records out 00:06:16.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317548 s, 12.9 MB/s 00:06:16.074 10:29:41 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.074 10:29:41 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:16.074 10:29:41 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.074 10:29:41 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:16.074 10:29:41 event.app_repeat -- 
common/autotest_common.sh@891 -- # return 0 00:06:16.074 10:29:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.074 10:29:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.074 10:29:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.074 10:29:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.074 10:29:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.333 10:29:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.333 { 00:06:16.333 "nbd_device": "/dev/nbd0", 00:06:16.333 "bdev_name": "Malloc0" 00:06:16.333 }, 00:06:16.333 { 00:06:16.333 "nbd_device": "/dev/nbd1", 00:06:16.333 "bdev_name": "Malloc1" 00:06:16.333 } 00:06:16.333 ]' 00:06:16.333 10:29:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.333 10:29:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.333 { 00:06:16.333 "nbd_device": "/dev/nbd0", 00:06:16.333 "bdev_name": "Malloc0" 00:06:16.333 }, 00:06:16.333 { 00:06:16.333 "nbd_device": "/dev/nbd1", 00:06:16.333 "bdev_name": "Malloc1" 00:06:16.333 } 00:06:16.333 ]' 00:06:16.333 10:29:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.333 /dev/nbd1' 00:06:16.333 10:29:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.333 10:29:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.333 /dev/nbd1' 00:06:16.333 10:29:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.333 10:29:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.333 10:29:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.333 10:29:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.333 10:29:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.333 10:29:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.333 10:29:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.333 10:29:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.333 10:29:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.333 10:29:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.333 10:29:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.333 256+0 records in 00:06:16.333 256+0 records out 00:06:16.333 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104901 s, 100 MB/s 00:06:16.333 10:29:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.333 10:29:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.592 256+0 records in 00:06:16.592 256+0 records out 00:06:16.592 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248973 s, 42.1 MB/s 00:06:16.592 10:29:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.592 10:29:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.592 256+0 records in 00:06:16.592 
256+0 records out 00:06:16.592 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241202 s, 43.5 MB/s 00:06:16.592 10:29:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:16.592 10:29:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.592 10:29:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.592 10:29:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:16.592 10:29:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.592 10:29:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:16.592 10:29:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:16.592 10:29:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.592 10:29:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:16.592 10:29:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.592 10:29:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:16.592 10:29:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.592 10:29:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:16.592 10:29:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.592 10:29:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.592 10:29:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.592 10:29:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:16.592 10:29:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.592 10:29:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.853 10:29:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.853 10:29:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.853 10:29:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.853 10:29:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.853 10:29:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.854 10:29:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.854 10:29:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.854 10:29:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.854 10:29:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.854 10:29:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.112 10:29:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.112 10:29:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.112 10:29:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.112 10:29:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.112 10:29:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:06:17.112 10:29:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.112 10:29:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.112 10:29:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.112 10:29:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.112 10:29:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.112 10:29:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.371 10:29:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.371 10:29:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.371 10:29:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.371 10:29:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.371 10:29:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.371 10:29:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.371 10:29:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:17.371 10:29:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.371 10:29:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.371 10:29:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.371 10:29:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.371 10:29:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.371 10:29:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:17.629 10:29:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:17.889 [2024-11-15 10:29:43.257621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.889 [2024-11-15 10:29:43.318831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.889 [2024-11-15 10:29:43.318842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.889 [2024-11-15 10:29:43.375192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.889 [2024-11-15 10:29:43.375278] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:17.889 [2024-11-15 10:29:43.375294] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:21.174 spdk_app_start Round 1 00:06:21.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:21.174 10:29:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:21.174 10:29:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:21.174 10:29:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58266 /var/tmp/spdk-nbd.sock 00:06:21.174 10:29:46 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58266 ']' 00:06:21.174 10:29:46 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.174 10:29:46 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:21.174 10:29:46 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:21.174 10:29:46 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:21.174 10:29:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:21.174 10:29:46 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:21.174 10:29:46 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:21.174 10:29:46 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.174 Malloc0 00:06:21.174 10:29:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.742 Malloc1 00:06:21.742 10:29:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.742 10:29:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.742 10:29:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.742 10:29:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:21.742 10:29:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.742 10:29:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:21.742 10:29:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.742 10:29:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.742 10:29:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.742 10:29:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.742 10:29:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.742 10:29:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:21.742 10:29:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:21.742 10:29:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.742 10:29:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.742 10:29:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.742 /dev/nbd0 00:06:22.001 10:29:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:22.002 10:29:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:22.002 10:29:47 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:22.002 10:29:47 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:22.002 10:29:47 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:22.002 10:29:47 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:22.002 10:29:47 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:22.002 10:29:47 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:22.002 10:29:47 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:22.002 10:29:47 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:22.002 10:29:47 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.002 1+0 records in 00:06:22.002 1+0 records out 
00:06:22.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367919 s, 11.1 MB/s 00:06:22.002 10:29:47 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.002 10:29:47 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:22.002 10:29:47 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.002 10:29:47 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:22.002 10:29:47 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:22.002 10:29:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.002 10:29:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.002 10:29:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:22.261 /dev/nbd1 00:06:22.261 10:29:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:22.261 10:29:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:22.261 10:29:47 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:22.261 10:29:47 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:22.261 10:29:47 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:22.261 10:29:47 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:22.261 10:29:47 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:22.261 10:29:47 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:22.261 10:29:47 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:22.261 10:29:47 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:22.261 10:29:47 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.261 1+0 records in 00:06:22.261 1+0 records out 00:06:22.261 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260442 s, 15.7 MB/s 00:06:22.261 10:29:47 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.261 10:29:47 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:22.261 10:29:47 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.261 10:29:47 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:22.261 10:29:47 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:22.261 10:29:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.261 10:29:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.261 10:29:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.261 10:29:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.261 10:29:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:22.520 { 00:06:22.520 "nbd_device": "/dev/nbd0", 00:06:22.520 "bdev_name": "Malloc0" 00:06:22.520 }, 00:06:22.520 { 00:06:22.520 "nbd_device": "/dev/nbd1", 00:06:22.520 "bdev_name": "Malloc1" 00:06:22.520 } 
00:06:22.520 ]' 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:22.520 { 00:06:22.520 "nbd_device": "/dev/nbd0", 00:06:22.520 "bdev_name": "Malloc0" 00:06:22.520 }, 00:06:22.520 { 00:06:22.520 "nbd_device": "/dev/nbd1", 00:06:22.520 "bdev_name": "Malloc1" 00:06:22.520 } 00:06:22.520 ]' 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:22.520 /dev/nbd1' 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:22.520 /dev/nbd1' 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:22.520 256+0 records in 00:06:22.520 256+0 records out 00:06:22.520 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00767082 s, 137 MB/s 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:22.520 256+0 records in 00:06:22.520 256+0 records out 00:06:22.520 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218121 s, 48.1 MB/s 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:22.520 256+0 records in 00:06:22.520 256+0 records out 00:06:22.520 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288089 s, 36.4 MB/s 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:22.520 10:29:47 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.520 10:29:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.812 10:29:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.812 10:29:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.812 10:29:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.812 10:29:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.812 10:29:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.812 10:29:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.812 10:29:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.812 10:29:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.812 10:29:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.812 10:29:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.378 10:29:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.378 10:29:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.378 10:29:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.378 10:29:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.378 10:29:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.378 10:29:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.378 10:29:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.378 10:29:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.378 10:29:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.378 10:29:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.378 10:29:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.378 10:29:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:23.378 10:29:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.378 10:29:48 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:23.636 10:29:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:23.636 10:29:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:23.636 10:29:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.636 10:29:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:23.636 10:29:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:23.636 10:29:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:23.636 10:29:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:23.636 10:29:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:23.636 10:29:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:23.636 10:29:48 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:23.894 10:29:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:23.894 [2024-11-15 10:29:49.380877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.153 [2024-11-15 10:29:49.445320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.153 [2024-11-15 10:29:49.445332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.153 [2024-11-15 10:29:49.503350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.153 [2024-11-15 10:29:49.503440] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:24.153 [2024-11-15 10:29:49.503455] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:27.438 spdk_app_start Round 2 00:06:27.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:27.438 10:29:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:27.438 10:29:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:27.438 10:29:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58266 /var/tmp/spdk-nbd.sock 00:06:27.438 10:29:52 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58266 ']' 00:06:27.438 10:29:52 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.438 10:29:52 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:27.438 10:29:52 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:27.438 10:29:52 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:27.438 10:29:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.438 10:29:52 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:27.438 10:29:52 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:27.438 10:29:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.438 Malloc0 00:06:27.438 10:29:52 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.005 Malloc1 00:06:28.005 10:29:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.005 10:29:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.005 10:29:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.005 10:29:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:28.005 10:29:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.005 10:29:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:28.005 10:29:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.005 10:29:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.005 10:29:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.005 10:29:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.005 10:29:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.005 10:29:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:28.005 10:29:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:28.005 10:29:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.005 10:29:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.005 10:29:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:28.005 /dev/nbd0 00:06:28.263 10:29:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.263 10:29:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.263 10:29:53 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:28.263 10:29:53 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:28.263 10:29:53 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:28.263 10:29:53 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:28.264 10:29:53 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:28.264 10:29:53 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:28.264 10:29:53 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:28.264 10:29:53 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:28.264 10:29:53 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.264 1+0 records in 00:06:28.264 1+0 records out 
00:06:28.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300158 s, 13.6 MB/s 00:06:28.264 10:29:53 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.264 10:29:53 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:28.264 10:29:53 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.264 10:29:53 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:28.264 10:29:53 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:28.264 10:29:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.264 10:29:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.264 10:29:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.522 /dev/nbd1 00:06:28.522 10:29:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.522 10:29:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.522 10:29:53 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:28.522 10:29:53 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:28.522 10:29:53 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:28.522 10:29:53 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:28.522 10:29:53 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:28.522 10:29:53 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:28.522 10:29:53 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:28.522 10:29:53 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:28.522 10:29:53 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.522 1+0 records in 00:06:28.522 1+0 records out 00:06:28.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019538 s, 21.0 MB/s 00:06:28.522 10:29:53 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.522 10:29:53 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:28.522 10:29:53 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.522 10:29:53 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:28.522 10:29:53 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:28.522 10:29:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.522 10:29:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.522 10:29:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.522 10:29:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.522 10:29:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.781 10:29:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.781 { 00:06:28.781 "nbd_device": "/dev/nbd0", 00:06:28.781 "bdev_name": "Malloc0" 00:06:28.781 }, 00:06:28.781 { 00:06:28.781 "nbd_device": "/dev/nbd1", 00:06:28.781 "bdev_name": "Malloc1" 00:06:28.781 } 
00:06:28.781 ]'
00:06:28.781 10:29:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:28.781 {
00:06:28.781 "nbd_device": "/dev/nbd0",
00:06:28.781 "bdev_name": "Malloc0"
00:06:28.781 },
00:06:28.781 {
00:06:28.781 "nbd_device": "/dev/nbd1",
00:06:28.781 "bdev_name": "Malloc1"
00:06:28.781 }
00:06:28.781 ]'
00:06:28.781 10:29:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:28.781 10:29:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:28.781 /dev/nbd1'
00:06:28.781 10:29:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:28.781 /dev/nbd1'
00:06:28.781 10:29:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:28.781 10:29:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:28.781 10:29:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:28.781 10:29:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:28.781 10:29:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:28.781 10:29:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:28.781 10:29:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:28.781 10:29:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:28.781 10:29:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:28.781 10:29:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:28.781 10:29:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:28.781 10:29:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:28.781 256+0 records in
00:06:28.781 256+0 records out
00:06:28.781 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00872184 s, 120 MB/s
00:06:28.781 10:29:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:28.781 10:29:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:28.781 256+0 records in
00:06:28.781 256+0 records out
00:06:28.781 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233999 s, 44.8 MB/s
00:06:28.781 10:29:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:28.781 10:29:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:29.040 256+0 records in
00:06:29.040 256+0 records out
00:06:29.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280224 s, 37.4 MB/s
00:06:29.040 10:29:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:29.040 10:29:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:29.040 10:29:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:29.040 10:29:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:29.040 10:29:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:29.040 10:29:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:29.040 10:29:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:29.040 10:29:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:29.040 10:29:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:06:29.040 10:29:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:29.040 10:29:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:06:29.040 10:29:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:29.040 10:29:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:29.040 10:29:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:29.040 10:29:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:29.040 10:29:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:29.040 10:29:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:29.040 10:29:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:29.040 10:29:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:29.298 10:29:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:29.298 10:29:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:29.298 10:29:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:29.298 10:29:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:29.298 10:29:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:29.298 10:29:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:29.298 10:29:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:29.298 10:29:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:29.298 10:29:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:29.298 10:29:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:29.556 10:29:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:29.556 10:29:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:29.556 10:29:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:29.556 10:29:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:29.556 10:29:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:29.556 10:29:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:29.556 10:29:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:29.556 10:29:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:29.556 10:29:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:29.556 10:29:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:29.556 10:29:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:29.814 10:29:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:29.814 10:29:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:29.814 10:29:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:29.814 10:29:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:29.814 10:29:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:29.814 10:29:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:29.814 10:29:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:29.814 10:29:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:29.814 10:29:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:29.814 10:29:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:29.814 10:29:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:29.814 10:29:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:29.814 10:29:55 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:30.381 10:29:55 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:30.381 [2024-11-15 10:29:55.767842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:30.381 [2024-11-15 10:29:55.827701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:30.381 [2024-11-15 10:29:55.827714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:30.639 [2024-11-15 10:29:55.882136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:06:30.639 [2024-11-15 10:29:55.882229] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:30.639 [2024-11-15 10:29:55.882245] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:33.171 10:29:58 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58266 /var/tmp/spdk-nbd.sock
00:06:33.171 10:29:58 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58266 ']'
00:06:33.171 10:29:58 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:33.171 10:29:58 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:33.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:33.171 10:29:58 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
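
The write/verify pass traced above is the heart of the nbd exercise: nbd_common.sh fills a 1 MiB temp file from /dev/urandom, writes it onto every exported /dev/nbdX with O_DIRECT, then reads each device back with cmp. A condensed, hand-written sketch of that round trip, with the device names and temp-file path taken from the trace (this is not the verbatim helper from nbd_common.sh):

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct # push it through the nbd device
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$dev"                            # non-zero exit on the first differing byte
    done
    rm "$tmp"

Because cmp exits non-zero on any mismatch, a corrupted block surfaces immediately as a failed test step rather than a silent pass.
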
00:06:33.171 10:29:58 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:33.171 10:29:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:33.430 10:29:58 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:33.430 10:29:58 event.app_repeat -- common/autotest_common.sh@866 -- # return 0
00:06:33.430 10:29:58 event.app_repeat -- event/event.sh@39 -- # killprocess 58266
00:06:33.430 10:29:58 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58266 ']'
00:06:33.430 10:29:58 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58266
00:06:33.430 10:29:58 event.app_repeat -- common/autotest_common.sh@957 -- # uname
00:06:33.430 10:29:58 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:33.430 10:29:58 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58266
00:06:33.430 10:29:58 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:33.430 10:29:58 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:33.430 killing process with pid 58266
00:06:33.430 10:29:58 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58266'
00:06:33.430 10:29:58 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58266
00:06:33.430 10:29:58 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58266
00:06:33.689 spdk_app_start is called in Round 0.
00:06:33.689 Shutdown signal received, stop current app iteration
00:06:33.689 Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 reinitialization...
00:06:33.689 spdk_app_start is called in Round 1.
00:06:33.689 Shutdown signal received, stop current app iteration
00:06:33.689 Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 reinitialization...
00:06:33.689 spdk_app_start is called in Round 2.
00:06:33.689 Shutdown signal received, stop current app iteration
00:06:33.689 Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 reinitialization...
00:06:33.689 spdk_app_start is called in Round 3.
00:06:33.689 Shutdown signal received, stop current app iteration
00:06:33.689 10:29:59 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:33.689 10:29:59 event.app_repeat -- event/event.sh@42 -- # return 0
00:06:33.689
00:06:33.689 real 0m19.418s
00:06:33.689 user 0m44.432s
00:06:33.689 sys 0m2.971s
00:06:33.689 10:29:59 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:33.689 10:29:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:33.689 ************************************
00:06:33.689 END TEST app_repeat
00:06:33.689 ************************************
00:06:33.689 10:29:59 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:33.689 10:29:59 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:06:33.689 10:29:59 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:33.689 10:29:59 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:33.689 10:29:59 event -- common/autotest_common.sh@10 -- # set +x
00:06:33.689 ************************************
00:06:33.689 START TEST cpu_locks
00:06:33.689 ************************************
00:06:33.689 10:29:59 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:06:33.948 * Looking for test storage...
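
killprocess, traced above for pid 58266 and reused by every test that follows, is the suite's standard teardown. Reconstructed from the xtrace, slightly simplified (the real helper in autotest_common.sh also special-cases processes running under sudo, which is what the reactor_0 = sudo comparison hints at):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                            # is the process still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for an SPDK app
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                                   # reap the child; ignore the signal status
    }

Note that wait only works for children of the current shell, which holds here because the same script launched the target in the background.
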
00:06:33.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:06:33.948 10:29:59 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:33.948 10:29:59 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version
00:06:33.948 10:29:59 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:06:33.948 10:29:59 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:33.948 10:29:59 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:06:33.948 10:29:59 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:33.948 10:29:59 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:06:33.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:33.948 --rc genhtml_branch_coverage=1
00:06:33.948 --rc genhtml_function_coverage=1
00:06:33.948 --rc genhtml_legend=1
00:06:33.948 --rc geninfo_all_blocks=1
00:06:33.948 --rc geninfo_unexecuted_blocks=1
00:06:33.948
00:06:33.948 '
00:06:33.949 10:29:59 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:06:33.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:33.949 --rc genhtml_branch_coverage=1
00:06:33.949 --rc genhtml_function_coverage=1
00:06:33.949 --rc genhtml_legend=1
00:06:33.949 --rc geninfo_all_blocks=1
00:06:33.949 --rc geninfo_unexecuted_blocks=1
00:06:33.949
00:06:33.949 '
00:06:33.949 10:29:59 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:06:33.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:33.949 --rc genhtml_branch_coverage=1
00:06:33.949 --rc genhtml_function_coverage=1
00:06:33.949 --rc genhtml_legend=1
00:06:33.949 --rc geninfo_all_blocks=1
00:06:33.949 --rc geninfo_unexecuted_blocks=1
00:06:33.949
00:06:33.949 '
00:06:33.949 10:29:59 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:06:33.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:33.949 --rc genhtml_branch_coverage=1
00:06:33.949 --rc genhtml_function_coverage=1
00:06:33.949 --rc genhtml_legend=1
00:06:33.949 --rc geninfo_all_blocks=1
00:06:33.949 --rc geninfo_unexecuted_blocks=1
00:06:33.949
00:06:33.949 '
00:06:33.949 10:29:59 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:33.949 10:29:59 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:33.949 10:29:59 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:33.949 10:29:59 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:33.949 10:29:59 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:33.949 10:29:59 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:33.949 10:29:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:33.949 ************************************
00:06:33.949 START TEST default_locks
00:06:33.949 ************************************
00:06:33.949 10:29:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks
00:06:33.949 10:29:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58710
00:06:33.949 10:29:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58710
00:06:33.949 10:29:59 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58710 ']'
00:06:33.949 10:29:59 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:33.949 10:29:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:33.949 10:29:59 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:33.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:33.949 10:29:59 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:33.949 10:29:59 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:33.949 10:29:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:33.949 [2024-11-15 10:29:59.427837] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
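
The long cmp_versions walk traced just before this is how the suite decides whether the installed lcov is older than 2: split both version strings on dots, dashes, and colons, then compare field by field. The idea, reduced to the '<' case only (the real helper in scripts/common.sh also handles '>', '>=', '<=' and strips non-numeric components through its decimal helper):

    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                                    # equal versions are not less-than
    }

    version_lt 1.15 2 && echo "old lcov"            # true: 1 < 2 already on the first field

That comparison is evidently what selected the --rc lcov_* option spelling exported into LCOV_OPTS above.
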
00:06:33.949 [2024-11-15 10:29:59.427997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58710 ]
00:06:34.208 [2024-11-15 10:29:59.576245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:34.208 [2024-11-15 10:29:59.643457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.466 [2024-11-15 10:29:59.721151] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:06:34.466 10:29:59 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:34.466 10:29:59 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0
00:06:34.466 10:29:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58710
00:06:34.466 10:29:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58710
00:06:34.466 10:29:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:35.032 10:30:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58710
00:06:35.032 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58710 ']'
00:06:35.032 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58710
00:06:35.032 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname
00:06:35.032 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:35.032 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58710
00:06:35.032 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:35.032 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:35.032 killing process with pid 58710
00:06:35.032 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58710'
00:06:35.032 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58710
00:06:35.032 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58710
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58710
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58710
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58710
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58710 ']'
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:35.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:35.599 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58710) - No such process
00:06:35.599 ERROR: process (pid: 58710) is no longer running
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:35.599
00:06:35.599 real 0m1.456s
00:06:35.599 user 0m1.423s
00:06:35.599 sys 0m0.541s
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:35.599 10:30:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:35.599 ************************************
00:06:35.599 END TEST default_locks
00:06:35.599 ************************************
00:06:35.599 10:30:00 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:35.599 10:30:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:35.599 10:30:00 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:35.599 10:30:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:35.599 ************************************
00:06:35.599 START TEST default_locks_via_rpc
00:06:35.599 ************************************
00:06:35.599 10:30:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc
00:06:35.599 10:30:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58759
00:06:35.599 10:30:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58759
00:06:35.599 10:30:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:35.599 10:30:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58759 ']'
00:06:35.599 10:30:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:35.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
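
default_locks establishes the contract the rest of the file keeps re-testing: while spdk_tgt (pid 58710) is alive, lslocks must show the spdk_cpu_lock file it holds, and once it is killed, waitforlisten on the stale pid must fail, with the expected "No such process" noise above turned into a pass by the NOT wrapper. The lock probe itself is a one-liner, reconstructed from the trace:

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock   # any /var/tmp/spdk_cpu_lock_* held by this pid?
    }

    locks_exist "$spdk_tgt_pid" && echo "target still holds its core lock"
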
00:06:35.599 10:30:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:35.599 10:30:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:35.599 10:30:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:35.599 10:30:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:35.599 [2024-11-15 10:30:00.932893] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:06:35.599 [2024-11-15 10:30:00.933005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58759 ]
00:06:35.857 [2024-11-15 10:30:01.082801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:35.857 [2024-11-15 10:30:01.146117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:35.857 [2024-11-15 10:30:01.219265] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:06:36.116 10:30:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:36.116 10:30:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0
00:06:36.116 10:30:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:36.116 10:30:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:36.116 10:30:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:36.116 10:30:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:36.116 10:30:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:36.116 10:30:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:36.116 10:30:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:36.116 10:30:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:36.116 10:30:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:36.116 10:30:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:36.116 10:30:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:36.116 10:30:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:36.116 10:30:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58759
00:06:36.116 10:30:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58759
00:06:36.116 10:30:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:36.681 10:30:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58759
00:06:36.681 10:30:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58759 ']'
00:06:36.681 10:30:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58759
00:06:36.681 10:30:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname
00:06:36.681 10:30:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:36.681 10:30:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58759
00:06:36.681 killing process with pid 58759
00:06:36.681 10:30:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:36.681 10:30:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:36.681 10:30:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58759'
00:06:36.681 10:30:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58759
00:06:36.681 10:30:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58759
00:06:36.940
00:06:36.940 real 0m1.500s
00:06:36.940 user 0m1.483s
00:06:36.940 sys 0m0.548s
00:06:36.940 10:30:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:36.940 ************************************
00:06:36.940 END TEST default_locks_via_rpc
00:06:36.940 ************************************
00:06:36.940 10:30:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:36.940 10:30:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:36.940 10:30:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:36.940 10:30:02 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:36.940 10:30:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:36.940 ************************************
00:06:36.940 START TEST non_locking_app_on_locked_coremask
00:06:36.940 ************************************
00:06:36.940 10:30:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask
00:06:36.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:36.940 10:30:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58798
00:06:36.940 10:30:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58798 /var/tmp/spdk.sock
00:06:36.940 10:30:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:36.940 10:30:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58798 ']'
00:06:36.940 10:30:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:36.940 10:30:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:36.940 10:30:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
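
default_locks_via_rpc, which just finished above, exercises the same lock from the RPC side: framework_disable_cpumask_locks releases the per-core lock files at runtime and framework_enable_cpumask_locks re-acquires them, with no_locks and locks_exist asserting the state after each call. Driven by hand against a running target it would look roughly like this (rpc_cmd in the trace sends the same method names that scripts/rpc.py exposes):

    scripts/rpc.py framework_disable_cpumask_locks        # target drops its /var/tmp/spdk_cpu_lock_* files
    lslocks -p "$spdk_tgt_pid" | grep -c spdk_cpu_lock    # expect 0 matches now
    scripts/rpc.py framework_enable_cpumask_locks         # target re-claims its cores
    lslocks -p "$spdk_tgt_pid" | grep -c spdk_cpu_lock    # expect 1 again (mask 0x1 is core 0 only)
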
00:06:36.940 10:30:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:36.940 10:30:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:37.199 [2024-11-15 10:30:02.488230] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:06:37.199 [2024-11-15 10:30:02.488369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58798 ]
00:06:37.199 [2024-11-15 10:30:02.636003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:37.457 [2024-11-15 10:30:02.701534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:37.457 [2024-11-15 10:30:02.772279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:06:38.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:38.393 10:30:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:38.393 10:30:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:06:38.393 10:30:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58814
00:06:38.393 10:30:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:38.393 10:30:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58814 /var/tmp/spdk2.sock
00:06:38.393 10:30:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58814 ']'
00:06:38.393 10:30:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:38.393 10:30:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:38.393 10:30:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:38.393 10:30:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:38.393 10:30:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:38.393 [2024-11-15 10:30:03.638122] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:06:38.393 [2024-11-15 10:30:03.638220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58814 ]
00:06:38.393 [2024-11-15 10:30:03.802509] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:38.652 [2024-11-15 10:30:03.944447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:38.652 [2024-11-15 10:30:04.112942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:06:39.589 10:30:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:39.589 10:30:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:06:39.589 10:30:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58798
00:06:39.589 10:30:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:39.589 10:30:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58798
00:06:40.158 10:30:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58798
00:06:40.158 10:30:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58798 ']'
00:06:40.158 10:30:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58798
00:06:40.158 10:30:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:06:40.158 10:30:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:40.158 10:30:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58798
00:06:40.158 killing process with pid 58798
00:06:40.158 10:30:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:40.158 10:30:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:40.158 10:30:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58798'
00:06:40.158 10:30:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58798
00:06:40.158 10:30:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58798
00:06:41.094 10:30:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58814
00:06:41.094 10:30:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58814 ']'
00:06:41.094 10:30:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58814
00:06:41.094 10:30:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:06:41.094 10:30:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:41.094 10:30:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58814
00:06:41.094 killing process with pid 58814
00:06:41.094 10:30:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:41.094 10:30:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:41.094 10:30:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58814'
00:06:41.094 10:30:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58814
00:06:41.094 10:30:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58814
00:06:41.661 ************************************
00:06:41.661 END TEST non_locking_app_on_locked_coremask
00:06:41.661 ************************************
00:06:41.661
00:06:41.661 real 0m4.468s
00:06:41.661 user 0m5.170s
00:06:41.661 sys 0m1.169s
00:06:41.661 10:30:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:41.661 10:30:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:41.661 10:30:06 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:41.661 10:30:06 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:41.661 10:30:06 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:41.661 10:30:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:41.661 ************************************
00:06:41.661 START TEST locking_app_on_unlocked_coremask
00:06:41.661 ************************************
00:06:41.661 10:30:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask
00:06:41.661 10:30:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58887
00:06:41.661 10:30:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:41.661 10:30:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58887 /var/tmp/spdk.sock
00:06:41.661 10:30:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58887 ']'
00:06:41.661 10:30:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:41.661 10:30:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:41.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:41.662 10:30:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:41.662 10:30:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:41.662 10:30:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:41.662 [2024-11-15 10:30:07.005455] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:06:41.662 [2024-11-15 10:30:07.005614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58887 ]
00:06:41.662 [2024-11-15 10:30:07.153441] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
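
That is the whole trick of non_locking_app_on_locked_coremask, which just finished above, compressed into two commands: a first target claims core 0, and a second target can share the same core only by opting out of the lock and listening on its own RPC socket. Stripped of the test plumbing (binary path and flags as in the trace):

    build/bin/spdk_tgt -m 0x1 &                                            # claims /var/tmp/spdk_cpu_lock_000
    pid1=$!
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                                                # same core mask, but takes no lock

Without --disable-cpumask-locks the second instance would abort with the "Cannot create lock on core 0" error that shows up later in this log.
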
00:06:41.662 [2024-11-15 10:30:07.153599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:41.948 [2024-11-15 10:30:07.239723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:41.948 [2024-11-15 10:30:07.314929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:06:42.884 10:30:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:42.884 10:30:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0
00:06:42.884 10:30:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58903
00:06:42.884 10:30:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:42.884 10:30:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58903 /var/tmp/spdk2.sock
00:06:42.884 10:30:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58903 ']'
00:06:42.884 10:30:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:42.884 10:30:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:42.884 10:30:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:42.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:42.884 10:30:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:42.884 10:30:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:43.143 [2024-11-15 10:30:08.145279] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
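
Every startup above funnels through waitforlisten, which blocks until the new daemon answers on its UNIX-domain RPC socket or dies first. A plausible minimal reduction of it (the real helper in autotest_common.sh retries up to max_retries=100 and does more bookkeeping; rpc_get_methods is used here only as a cheap liveness probe, which is an assumption, not the helper's literal implementation):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1         # process died before listening
            [ -S "$rpc_addr" ] &&
                scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }
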
00:06:42.884 [2024-11-15 10:30:08.145913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58903 ]
00:06:43.143 [2024-11-15 10:30:08.321187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:43.143 [2024-11-15 10:30:08.472847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:43.143 [2024-11-15 10:30:08.626047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:06:43.710 10:30:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:43.710 10:30:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0
00:06:43.710 10:30:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58903
00:06:43.710 10:30:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58903
00:06:43.710 10:30:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:44.645 10:30:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58887
00:06:44.645 10:30:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58887 ']'
00:06:44.645 10:30:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58887
00:06:44.645 10:30:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname
00:06:44.645 10:30:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:44.645 10:30:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58887
00:06:44.645 killing process with pid 58887
00:06:44.645 10:30:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:44.645 10:30:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:44.645 10:30:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58887'
00:06:44.645 10:30:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58887
00:06:44.645 10:30:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58887
00:06:45.580 10:30:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58903
00:06:45.580 10:30:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58903 ']'
00:06:45.580 10:30:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58903
00:06:45.580 10:30:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname
00:06:45.580 10:30:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:45.580 10:30:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58903
00:06:45.580 killing process with pid 58903
00:06:45.580 10:30:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:45.580 10:30:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:45.580 10:30:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58903'
00:06:45.580 10:30:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58903
00:06:45.580 10:30:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58903
00:06:45.838 ************************************
00:06:45.838 END TEST locking_app_on_unlocked_coremask
00:06:45.838 ************************************
00:06:45.838
00:06:45.838 real 0m4.359s
00:06:45.838 user 0m4.984s
00:06:45.838 sys 0m1.166s
00:06:45.838 10:30:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:45.838 10:30:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:45.838 10:30:11 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:45.838 10:30:11 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:45.838 10:30:11 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:45.838 10:30:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:46.097 ************************************
00:06:46.097 START TEST locking_app_on_locked_coremask
00:06:46.097 ************************************
00:06:46.097 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask
00:06:46.097 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58975
00:06:46.097 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58975 /var/tmp/spdk.sock
00:06:46.097 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:46.097 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58975 ']'
00:06:46.097 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:46.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:46.097 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:46.097 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:46.097 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:46.097 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:46.097 [2024-11-15 10:30:11.408288] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
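
locking_app_on_locked_coremask, starting here, inverts the previous two tests: the second target keeps cpumask locks enabled, so it must fail, and the suite encodes "must fail" with the NOT wrapper whose es bookkeeping shows up in the traces. Reduced to its core (a sketch; the real version also validates its argument via valid_exec_arg, as the type -t trace lines show):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=1   # normalize deaths-by-signal to a plain failure
        (( es != 0 ))            # succeed only if the wrapped command failed
    }

    NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock   # passes only if pid2 never comes up
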
00:06:46.097 [2024-11-15 10:30:11.408392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58975 ]
00:06:46.097 [2024-11-15 10:30:11.556982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:46.355 [2024-11-15 10:30:11.621542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:46.355 [2024-11-15 10:30:11.692799] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:06:46.614 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:46.614 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:06:46.614 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58984
00:06:46.614 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58984 /var/tmp/spdk2.sock
00:06:46.614 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:06:46.614 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:46.614 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58984 /var/tmp/spdk2.sock
00:06:46.614 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:46.614 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:46.614 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:46.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:46.614 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:46.614 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58984 /var/tmp/spdk2.sock
00:06:46.614 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58984 ']'
00:06:46.614 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:46.614 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:46.614 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:46.614 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:46.614 10:30:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:46.614 [2024-11-15 10:30:11.965090] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
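
What actually stops pid 58984 just below is file locking: each claimed core is backed by a lock file, /var/tmp/spdk_cpu_lock_000 for core 0, and a second claimant is refused. A similar collision can be reproduced outside SPDK with util-linux flock(1); this is a hypothetical stand-alone demo, not part of the suite, and SPDK takes its lock in app.c rather than through the flock utility:

    flock -n /var/tmp/spdk_cpu_lock_000 sleep 30 &     # first holder pins core 0's lock file
    sleep 0.2
    flock -n /var/tmp/spdk_cpu_lock_000 true ||
        echo "core 0 already claimed"                  # second taker fails at once, like pid 58984
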
00:06:46.614 [2024-11-15 10:30:11.965459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58984 ]
00:06:46.871 [2024-11-15 10:30:12.129552] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58975 has claimed it.
00:06:46.871 [2024-11-15 10:30:12.129635] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:47.453 ERROR: process (pid: 58984) is no longer running
00:06:47.453 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58984) - No such process
00:06:47.453 10:30:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:47.453 10:30:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1
00:06:47.453 10:30:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:06:47.453 10:30:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:47.453 10:30:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:47.453 10:30:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:47.453 10:30:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58975
00:06:47.453 10:30:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58975
00:06:47.453 10:30:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:47.711 10:30:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58975
00:06:47.711 10:30:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58975 ']'
00:06:47.711 10:30:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58975
00:06:47.711 10:30:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:06:47.711 10:30:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:47.711 10:30:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58975
00:06:47.711 killing process with pid 58975
00:06:47.711 10:30:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:47.711 10:30:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:47.711 10:30:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58975'
00:06:47.711 10:30:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58975
00:06:47.711 10:30:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58975
00:06:48.277 ************************************
00:06:48.277 END TEST locking_app_on_locked_coremask
00:06:48.277 ************************************
00:06:48.277
00:06:48.277 real 0m2.220s
00:06:48.277 user 0m2.556s
00:06:48.277 sys 0m0.586s
00:06:48.277 10:30:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:48.277 10:30:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:48.277 10:30:13 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:48.277 10:30:13 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:48.277 10:30:13 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:48.277 10:30:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:48.277 ************************************
00:06:48.277 START TEST locking_overlapped_coremask
00:06:48.277 ************************************
00:06:48.277 10:30:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask
00:06:48.277 10:30:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59029
00:06:48.277 10:30:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59029 /var/tmp/spdk.sock
00:06:48.278 10:30:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59029 ']'
00:06:48.278 10:30:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:06:48.278 10:30:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:48.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:48.278 10:30:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:48.278 10:30:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:48.278 10:30:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:48.278 10:30:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:48.278 [2024-11-15 10:30:13.680550] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
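
For the overlapped variant starting here, the core masks do the arithmetic: the first target's 0x7 is binary 111 (cores 0 through 2), and the second target launched below uses 0x1c, binary 11100 (cores 2 through 4), so the two instances collide on exactly one core:

    echo $(( 0x7 & 0x1c ))   # prints 4 == 1<<2, i.e. the shared core is core 2

which is why the failure further down reads "Cannot create lock on core 2".
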
00:06:48.278 [2024-11-15 10:30:13.680666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59029 ]
00:06:48.535 [2024-11-15 10:30:13.833425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:48.535 [2024-11-15 10:30:13.909840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:48.535 [2024-11-15 10:30:13.909949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:48.535 [2024-11-15 10:30:13.909960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:48.535 [2024-11-15 10:30:13.989726] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:06:49.468 10:30:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:49.468 10:30:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0
00:06:49.468 10:30:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59047
00:06:49.468 10:30:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:49.468 10:30:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59047 /var/tmp/spdk2.sock
00:06:49.468 10:30:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:06:49.468 10:30:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59047 /var/tmp/spdk2.sock
00:06:49.468 10:30:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:49.468 10:30:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:49.468 10:30:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:49.468 10:30:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:49.468 10:30:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59047 /var/tmp/spdk2.sock
00:06:49.468 10:30:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59047 ']'
00:06:49.468 10:30:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:49.469 10:30:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:49.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:49.469 10:30:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:49.469 10:30:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:49.469 10:30:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:49.469 [2024-11-15 10:30:14.789015] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:06:49.469 [2024-11-15 10:30:14.789119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59047 ] 00:06:49.469 [2024-11-15 10:30:14.950903] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59029 has claimed it. 00:06:49.469 [2024-11-15 10:30:14.950992] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:50.035 ERROR: process (pid: 59047) is no longer running 00:06:50.035 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59047) - No such process 00:06:50.035 10:30:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:50.035 10:30:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:50.035 10:30:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:50.035 10:30:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.035 10:30:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:50.035 10:30:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.035 10:30:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:50.035 10:30:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:50.035 10:30:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:50.035 10:30:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:50.035 10:30:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59029 00:06:50.035 10:30:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59029 ']' 00:06:50.035 10:30:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59029 00:06:50.035 10:30:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:06:50.035 10:30:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:50.035 10:30:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59029 00:06:50.293 10:30:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:50.293 10:30:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:50.293 killing process with pid 59029 00:06:50.294 10:30:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59029' 00:06:50.294 10:30:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59029 00:06:50.294 10:30:15 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59029 00:06:50.552 00:06:50.552 real 0m2.334s 00:06:50.552 user 0m6.617s 00:06:50.552 sys 0m0.472s 00:06:50.552 10:30:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:50.552 10:30:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.552 ************************************ 00:06:50.552 END TEST locking_overlapped_coremask 00:06:50.552 ************************************ 00:06:50.552 10:30:15 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:50.552 10:30:15 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:50.552 10:30:15 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:50.552 10:30:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.552 ************************************ 00:06:50.552 START TEST locking_overlapped_coremask_via_rpc 00:06:50.552 ************************************ 00:06:50.552 10:30:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:06:50.552 10:30:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59093 00:06:50.552 10:30:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59093 /var/tmp/spdk.sock 00:06:50.552 10:30:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:50.552 10:30:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59093 ']' 00:06:50.552 10:30:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.552 10:30:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:50.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.552 10:30:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.552 10:30:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:50.552 10:30:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.810 [2024-11-15 10:30:16.063175] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:06:50.810 [2024-11-15 10:30:16.063291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59093 ] 00:06:50.810 [2024-11-15 10:30:16.219111] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:50.810 [2024-11-15 10:30:16.219221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.068 [2024-11-15 10:30:16.309310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.068 [2024-11-15 10:30:16.309444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.068 [2024-11-15 10:30:16.309454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.068 [2024-11-15 10:30:16.390715] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.660 10:30:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:51.660 10:30:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:51.660 10:30:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59111 00:06:51.660 10:30:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:51.660 10:30:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59111 /var/tmp/spdk2.sock 00:06:51.660 10:30:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59111 ']' 00:06:51.660 10:30:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.660 10:30:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:51.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.660 10:30:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.660 10:30:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:51.660 10:30:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.660 [2024-11-15 10:30:17.109782] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:06:51.660 [2024-11-15 10:30:17.109893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59111 ] 00:06:51.940 [2024-11-15 10:30:17.278242] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:51.940 [2024-11-15 10:30:17.278301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.940 [2024-11-15 10:30:17.413324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.940 [2024-11-15 10:30:17.413428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.940 [2024-11-15 10:30:17.413428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:52.199 [2024-11-15 10:30:17.560045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.766 [2024-11-15 10:30:18.136647] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59093 has claimed it. 
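The *ERROR* above is the mechanism this test exists to exercise: each claimed reactor core is guarded by a lock file under /var/tmp, so a second target whose mask overlaps an already-claimed core fails its claim. A minimal sketch of the verification the harness performs (mirroring the check_remaining_locks traces elsewhere in this log; a target holding mask 0x7 should leave exactly the lock files for cores 000-002):

locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files actually present
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 for mask 0x7
[[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'core locks match mask 0x7'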
00:06:52.766 request: 00:06:52.766 { 00:06:52.766 "method": "framework_enable_cpumask_locks", 00:06:52.766 "req_id": 1 00:06:52.766 } 00:06:52.766 Got JSON-RPC error response 00:06:52.766 response: 00:06:52.766 { 00:06:52.766 "code": -32603, 00:06:52.766 "message": "Failed to claim CPU core: 2" 00:06:52.766 } 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59093 /var/tmp/spdk.sock 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59093 ']' 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:52.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:52.766 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.024 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:53.024 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:53.024 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59111 /var/tmp/spdk2.sock 00:06:53.024 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59111 ']' 00:06:53.024 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.024 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:53.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.024 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
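The request/response pair above is the via-RPC variant of the claim: both targets start with --disable-cpumask-locks, the first then claims its cores with the framework_enable_cpumask_locks RPC, and the second's attempt fails with JSON-RPC error -32603 because core 2 is already held. A condensed sketch of that flow, with the binary paths, masks, and socket taken from the traces above:

# First target claims cores 0-2 via RPC; the second overlaps on core 2 and fails.
build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
scripts/rpc.py framework_enable_cpumask_locks                        # succeeds
build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# -> {"code": -32603, "message": "Failed to claim CPU core: 2"}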
00:06:53.024 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:53.024 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.283 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:53.283 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:53.283 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:53.283 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:53.283 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:53.283 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:53.283 00:06:53.283 real 0m2.697s 00:06:53.283 user 0m1.421s 00:06:53.283 sys 0m0.201s 00:06:53.283 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:53.283 ************************************ 00:06:53.283 END TEST locking_overlapped_coremask_via_rpc 00:06:53.283 ************************************ 00:06:53.283 10:30:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.283 10:30:18 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:53.283 10:30:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59093 ]] 00:06:53.283 10:30:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59093 00:06:53.283 10:30:18 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59093 ']' 00:06:53.283 10:30:18 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59093 00:06:53.283 10:30:18 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:53.283 10:30:18 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:53.283 10:30:18 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59093 00:06:53.283 10:30:18 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:53.283 killing process with pid 59093 00:06:53.283 10:30:18 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:53.283 10:30:18 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59093' 00:06:53.283 10:30:18 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59093 00:06:53.283 10:30:18 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59093 00:06:53.850 10:30:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59111 ]] 00:06:53.850 10:30:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59111 00:06:53.850 10:30:19 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59111 ']' 00:06:53.850 10:30:19 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59111 00:06:53.850 10:30:19 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:53.850 10:30:19 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:53.850 
10:30:19 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59111 00:06:53.850 10:30:19 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:53.850 10:30:19 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:53.850 killing process with pid 59111 00:06:53.850 10:30:19 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59111' 00:06:53.850 10:30:19 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59111 00:06:53.850 10:30:19 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59111 00:06:54.108 10:30:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:54.108 10:30:19 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:54.108 10:30:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59093 ]] 00:06:54.108 10:30:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59093 00:06:54.108 10:30:19 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59093 ']' 00:06:54.108 10:30:19 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59093 00:06:54.109 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59093) - No such process 00:06:54.109 Process with pid 59093 is not found 00:06:54.109 10:30:19 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59093 is not found' 00:06:54.109 10:30:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59111 ]] 00:06:54.109 10:30:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59111 00:06:54.109 10:30:19 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59111 ']' 00:06:54.109 10:30:19 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59111 00:06:54.109 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59111) - No such process 00:06:54.109 Process with pid 59111 is not found 00:06:54.109 10:30:19 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59111 is not found' 00:06:54.109 10:30:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:54.109 00:06:54.109 real 0m20.455s 00:06:54.109 user 0m36.619s 00:06:54.109 sys 0m5.610s 00:06:54.109 10:30:19 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:54.109 10:30:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.109 ************************************ 00:06:54.109 END TEST cpu_locks 00:06:54.109 ************************************ 00:06:54.367 00:06:54.367 real 0m48.932s 00:06:54.367 user 1m36.586s 00:06:54.367 sys 0m9.412s 00:06:54.367 10:30:19 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:54.367 10:30:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.367 ************************************ 00:06:54.367 END TEST event 00:06:54.367 ************************************ 00:06:54.367 10:30:19 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:54.367 10:30:19 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:54.367 10:30:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:54.367 10:30:19 -- common/autotest_common.sh@10 -- # set +x 00:06:54.367 ************************************ 00:06:54.367 START TEST thread 00:06:54.367 ************************************ 00:06:54.367 10:30:19 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:54.367 * Looking for test storage... 
00:06:54.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:54.367 10:30:19 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:54.367 10:30:19 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:54.367 10:30:19 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:54.367 10:30:19 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:54.367 10:30:19 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.367 10:30:19 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.367 10:30:19 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.367 10:30:19 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.367 10:30:19 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.367 10:30:19 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.367 10:30:19 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.367 10:30:19 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.367 10:30:19 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.367 10:30:19 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.367 10:30:19 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.367 10:30:19 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:54.367 10:30:19 thread -- scripts/common.sh@345 -- # : 1 00:06:54.367 10:30:19 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.367 10:30:19 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:54.367 10:30:19 thread -- scripts/common.sh@365 -- # decimal 1 00:06:54.367 10:30:19 thread -- scripts/common.sh@353 -- # local d=1 00:06:54.367 10:30:19 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.367 10:30:19 thread -- scripts/common.sh@355 -- # echo 1 00:06:54.367 10:30:19 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.367 10:30:19 thread -- scripts/common.sh@366 -- # decimal 2 00:06:54.367 10:30:19 thread -- scripts/common.sh@353 -- # local d=2 00:06:54.367 10:30:19 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.367 10:30:19 thread -- scripts/common.sh@355 -- # echo 2 00:06:54.367 10:30:19 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.367 10:30:19 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.367 10:30:19 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.367 10:30:19 thread -- scripts/common.sh@368 -- # return 0 00:06:54.367 10:30:19 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.367 10:30:19 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:54.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.367 --rc genhtml_branch_coverage=1 00:06:54.367 --rc genhtml_function_coverage=1 00:06:54.367 --rc genhtml_legend=1 00:06:54.367 --rc geninfo_all_blocks=1 00:06:54.367 --rc geninfo_unexecuted_blocks=1 00:06:54.367 00:06:54.367 ' 00:06:54.367 10:30:19 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:54.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.367 --rc genhtml_branch_coverage=1 00:06:54.367 --rc genhtml_function_coverage=1 00:06:54.367 --rc genhtml_legend=1 00:06:54.367 --rc geninfo_all_blocks=1 00:06:54.367 --rc geninfo_unexecuted_blocks=1 00:06:54.367 00:06:54.367 ' 00:06:54.367 10:30:19 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:54.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:54.367 --rc genhtml_branch_coverage=1 00:06:54.367 --rc genhtml_function_coverage=1 00:06:54.367 --rc genhtml_legend=1 00:06:54.367 --rc geninfo_all_blocks=1 00:06:54.367 --rc geninfo_unexecuted_blocks=1 00:06:54.367 00:06:54.367 ' 00:06:54.367 10:30:19 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:54.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.367 --rc genhtml_branch_coverage=1 00:06:54.367 --rc genhtml_function_coverage=1 00:06:54.367 --rc genhtml_legend=1 00:06:54.367 --rc geninfo_all_blocks=1 00:06:54.367 --rc geninfo_unexecuted_blocks=1 00:06:54.367 00:06:54.367 ' 00:06:54.367 10:30:19 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:54.367 10:30:19 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:54.367 10:30:19 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:54.367 10:30:19 thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.625 ************************************ 00:06:54.625 START TEST thread_poller_perf 00:06:54.625 ************************************ 00:06:54.625 10:30:19 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:54.625 [2024-11-15 10:30:19.889450] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:06:54.625 [2024-11-15 10:30:19.889553] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59241 ] 00:06:54.625 [2024-11-15 10:30:20.032230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.625 Running 1000 pollers for 1 seconds with 1 microseconds period. 
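The banner above decodes the poller_perf flags from the invocation just traced: -b is the number of pollers, -l the poller period in microseconds, -t the run time in seconds. The two configurations this suite runs, per the run_test traces:

test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # 1000 pollers, 1 usec period, 1 second
test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # same, but with a 0 usec period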
00:06:54.625 [2024-11-15 10:30:20.096764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.000 [2024-11-15T10:30:21.498Z] ====================================== 00:06:56.000 [2024-11-15T10:30:21.498Z] busy:2206273718 (cyc) 00:06:56.000 [2024-11-15T10:30:21.498Z] total_run_count: 311000 00:06:56.000 [2024-11-15T10:30:21.498Z] tsc_hz: 2200000000 (cyc) 00:06:56.000 [2024-11-15T10:30:21.498Z] ====================================== 00:06:56.000 [2024-11-15T10:30:21.498Z] poller_cost: 7094 (cyc), 3224 (nsec) 00:06:56.000 00:06:56.000 real 0m1.289s 00:06:56.000 user 0m1.135s 00:06:56.000 sys 0m0.046s 00:06:56.000 10:30:21 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:56.000 10:30:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.000 ************************************ 00:06:56.000 END TEST thread_poller_perf 00:06:56.000 ************************************ 00:06:56.000 10:30:21 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.000 10:30:21 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:56.000 10:30:21 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.000 10:30:21 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.000 ************************************ 00:06:56.000 START TEST thread_poller_perf 00:06:56.000 ************************************ 00:06:56.000 10:30:21 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.000 [2024-11-15 10:30:21.225116] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:06:56.000 [2024-11-15 10:30:21.225238] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59277 ] 00:06:56.000 [2024-11-15 10:30:21.377990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.000 Running 1000 pollers for 1 seconds with 0 microseconds period. 
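The figures in the results block above are internally consistent: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure converts that through the reported 2200000000 Hz TSC. A quick check with the first run's numbers (the 0-usec run reported next follows the same arithmetic):

# Derivation of the poller_cost line from the run report above.
awk 'BEGIN { busy=2206273718; runs=311000; hz=2200000000
             cyc = busy / runs
             printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc / hz * 1e9 }'
# -> poller_cost: 7094 (cyc), 3224 (nsec), matching the report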
00:06:56.000 [2024-11-15 10:30:21.444288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.399 [2024-11-15T10:30:22.897Z] ====================================== 00:06:57.399 [2024-11-15T10:30:22.897Z] busy:2202327179 (cyc) 00:06:57.399 [2024-11-15T10:30:22.897Z] total_run_count: 4079000 00:06:57.399 [2024-11-15T10:30:22.897Z] tsc_hz: 2200000000 (cyc) 00:06:57.399 [2024-11-15T10:30:22.897Z] ====================================== 00:06:57.399 [2024-11-15T10:30:22.897Z] poller_cost: 539 (cyc), 245 (nsec) 00:06:57.399 00:06:57.399 real 0m1.289s 00:06:57.399 user 0m1.135s 00:06:57.399 sys 0m0.047s 00:06:57.399 10:30:22 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:57.399 10:30:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:57.399 ************************************ 00:06:57.399 END TEST thread_poller_perf 00:06:57.400 ************************************ 00:06:57.400 10:30:22 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:57.400 00:06:57.400 real 0m2.854s 00:06:57.400 user 0m2.416s 00:06:57.400 sys 0m0.224s 00:06:57.400 10:30:22 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:57.400 10:30:22 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.400 ************************************ 00:06:57.400 END TEST thread 00:06:57.400 ************************************ 00:06:57.400 10:30:22 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:57.400 10:30:22 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:57.400 10:30:22 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:57.400 10:30:22 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:57.400 10:30:22 -- common/autotest_common.sh@10 -- # set +x 00:06:57.400 ************************************ 00:06:57.400 START TEST app_cmdline 00:06:57.400 ************************************ 00:06:57.400 10:30:22 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:57.400 * Looking for test storage... 
00:06:57.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:57.400 10:30:22 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:57.400 10:30:22 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:57.400 10:30:22 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:57.400 10:30:22 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.400 10:30:22 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:57.400 10:30:22 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.400 10:30:22 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:57.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.400 --rc genhtml_branch_coverage=1 00:06:57.400 --rc genhtml_function_coverage=1 00:06:57.400 --rc genhtml_legend=1 00:06:57.400 --rc geninfo_all_blocks=1 00:06:57.400 --rc geninfo_unexecuted_blocks=1 00:06:57.400 00:06:57.400 ' 00:06:57.400 10:30:22 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:57.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.400 --rc genhtml_branch_coverage=1 00:06:57.400 --rc genhtml_function_coverage=1 00:06:57.400 --rc genhtml_legend=1 00:06:57.400 --rc geninfo_all_blocks=1 00:06:57.400 --rc geninfo_unexecuted_blocks=1 00:06:57.400 
00:06:57.400 ' 00:06:57.400 10:30:22 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:57.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.400 --rc genhtml_branch_coverage=1 00:06:57.400 --rc genhtml_function_coverage=1 00:06:57.400 --rc genhtml_legend=1 00:06:57.400 --rc geninfo_all_blocks=1 00:06:57.400 --rc geninfo_unexecuted_blocks=1 00:06:57.400 00:06:57.400 ' 00:06:57.400 10:30:22 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:57.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.400 --rc genhtml_branch_coverage=1 00:06:57.400 --rc genhtml_function_coverage=1 00:06:57.400 --rc genhtml_legend=1 00:06:57.400 --rc geninfo_all_blocks=1 00:06:57.400 --rc geninfo_unexecuted_blocks=1 00:06:57.400 00:06:57.400 ' 00:06:57.400 10:30:22 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:57.400 10:30:22 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59354 00:06:57.400 10:30:22 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:57.400 10:30:22 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59354 00:06:57.400 10:30:22 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59354 ']' 00:06:57.400 10:30:22 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.400 10:30:22 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:57.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.400 10:30:22 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.400 10:30:22 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:57.400 10:30:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:57.400 [2024-11-15 10:30:22.866292] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:06:57.400 [2024-11-15 10:30:22.866389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59354 ] 00:06:57.659 [2024-11-15 10:30:23.019232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.659 [2024-11-15 10:30:23.108593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.917 [2024-11-15 10:30:23.189022] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.500 10:30:23 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:58.500 10:30:23 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:06:58.500 10:30:23 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:58.757 { 00:06:58.757 "version": "SPDK v25.01-pre git sha1 dec6d3843", 00:06:58.757 "fields": { 00:06:58.757 "major": 25, 00:06:58.757 "minor": 1, 00:06:58.757 "patch": 0, 00:06:58.757 "suffix": "-pre", 00:06:58.757 "commit": "dec6d3843" 00:06:58.757 } 00:06:58.757 } 00:06:58.757 10:30:24 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:58.757 10:30:24 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:58.757 10:30:24 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:58.757 10:30:24 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:58.757 10:30:24 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:58.757 10:30:24 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.757 10:30:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:58.758 10:30:24 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:58.758 10:30:24 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:58.758 10:30:24 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.758 10:30:24 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:58.758 10:30:24 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:58.758 10:30:24 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:58.758 10:30:24 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:58.758 10:30:24 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:58.758 10:30:24 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:58.758 10:30:24 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.758 10:30:24 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:58.758 10:30:24 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.758 10:30:24 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.017 10:30:24 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.017 10:30:24 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.017 10:30:24 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:59.017 10:30:24 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.017 request: 00:06:59.017 { 00:06:59.017 "method": "env_dpdk_get_mem_stats", 00:06:59.017 "req_id": 1 00:06:59.017 } 00:06:59.017 Got JSON-RPC error response 00:06:59.017 response: 00:06:59.017 { 00:06:59.017 "code": -32601, 00:06:59.017 "message": "Method not found" 00:06:59.017 } 00:06:59.276 10:30:24 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:59.276 10:30:24 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:59.276 10:30:24 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:59.276 10:30:24 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:59.276 10:30:24 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59354 00:06:59.276 10:30:24 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59354 ']' 00:06:59.276 10:30:24 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59354 00:06:59.276 10:30:24 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:06:59.276 10:30:24 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:59.276 10:30:24 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59354 00:06:59.276 killing process with pid 59354 00:06:59.276 10:30:24 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:59.276 10:30:24 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:59.276 10:30:24 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59354' 00:06:59.276 10:30:24 app_cmdline -- common/autotest_common.sh@971 -- # kill 59354 00:06:59.276 10:30:24 app_cmdline -- common/autotest_common.sh@976 -- # wait 59354 00:06:59.535 ************************************ 00:06:59.535 END TEST app_cmdline 00:06:59.535 ************************************ 00:06:59.535 00:06:59.535 real 0m2.352s 00:06:59.535 user 0m2.955s 00:06:59.535 sys 0m0.513s 00:06:59.535 10:30:24 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:59.535 10:30:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.535 10:30:24 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:59.535 10:30:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:59.535 10:30:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:59.535 10:30:24 -- common/autotest_common.sh@10 -- # set +x 00:06:59.535 ************************************ 00:06:59.535 START TEST version 00:06:59.535 ************************************ 00:06:59.535 10:30:24 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:59.794 * Looking for test storage... 
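The -32601 exchange above is the allowlist behavior that cmdline.sh exercises: the target was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods respond normally (the version JSON earlier in this test) while anything else, here env_dpdk_get_mem_stats, is rejected with 'Method not found'. A condensed sketch using the same flags:

# Only the two allowlisted methods are served; every other RPC returns -32601.
build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
scripts/rpc.py spdk_get_version          # ok: version/commit JSON
scripts/rpc.py env_dpdk_get_mem_stats    # -> {"code": -32601, "message": "Method not found"}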
00:06:59.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:59.794 10:30:25 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:59.794 10:30:25 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:59.794 10:30:25 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:59.794 10:30:25 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:59.794 10:30:25 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.794 10:30:25 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.794 10:30:25 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.794 10:30:25 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.794 10:30:25 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.794 10:30:25 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.794 10:30:25 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.794 10:30:25 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.794 10:30:25 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.794 10:30:25 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.794 10:30:25 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.795 10:30:25 version -- scripts/common.sh@344 -- # case "$op" in 00:06:59.795 10:30:25 version -- scripts/common.sh@345 -- # : 1 00:06:59.795 10:30:25 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.795 10:30:25 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:59.795 10:30:25 version -- scripts/common.sh@365 -- # decimal 1 00:06:59.795 10:30:25 version -- scripts/common.sh@353 -- # local d=1 00:06:59.795 10:30:25 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.795 10:30:25 version -- scripts/common.sh@355 -- # echo 1 00:06:59.795 10:30:25 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.795 10:30:25 version -- scripts/common.sh@366 -- # decimal 2 00:06:59.795 10:30:25 version -- scripts/common.sh@353 -- # local d=2 00:06:59.795 10:30:25 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.795 10:30:25 version -- scripts/common.sh@355 -- # echo 2 00:06:59.795 10:30:25 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.795 10:30:25 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.795 10:30:25 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.795 10:30:25 version -- scripts/common.sh@368 -- # return 0 00:06:59.795 10:30:25 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.795 10:30:25 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:59.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.795 --rc genhtml_branch_coverage=1 00:06:59.795 --rc genhtml_function_coverage=1 00:06:59.795 --rc genhtml_legend=1 00:06:59.795 --rc geninfo_all_blocks=1 00:06:59.795 --rc geninfo_unexecuted_blocks=1 00:06:59.795 00:06:59.795 ' 00:06:59.795 10:30:25 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:59.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.795 --rc genhtml_branch_coverage=1 00:06:59.795 --rc genhtml_function_coverage=1 00:06:59.795 --rc genhtml_legend=1 00:06:59.795 --rc geninfo_all_blocks=1 00:06:59.795 --rc geninfo_unexecuted_blocks=1 00:06:59.795 00:06:59.795 ' 00:06:59.795 10:30:25 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:59.795 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:59.795 --rc genhtml_branch_coverage=1 00:06:59.795 --rc genhtml_function_coverage=1 00:06:59.795 --rc genhtml_legend=1 00:06:59.795 --rc geninfo_all_blocks=1 00:06:59.795 --rc geninfo_unexecuted_blocks=1 00:06:59.795 00:06:59.795 ' 00:06:59.795 10:30:25 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:59.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.795 --rc genhtml_branch_coverage=1 00:06:59.795 --rc genhtml_function_coverage=1 00:06:59.795 --rc genhtml_legend=1 00:06:59.795 --rc geninfo_all_blocks=1 00:06:59.795 --rc geninfo_unexecuted_blocks=1 00:06:59.795 00:06:59.795 ' 00:06:59.795 10:30:25 version -- app/version.sh@17 -- # get_header_version major 00:06:59.795 10:30:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:59.795 10:30:25 version -- app/version.sh@14 -- # cut -f2 00:06:59.795 10:30:25 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.795 10:30:25 version -- app/version.sh@17 -- # major=25 00:06:59.795 10:30:25 version -- app/version.sh@18 -- # get_header_version minor 00:06:59.795 10:30:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:59.795 10:30:25 version -- app/version.sh@14 -- # cut -f2 00:06:59.795 10:30:25 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.795 10:30:25 version -- app/version.sh@18 -- # minor=1 00:06:59.795 10:30:25 version -- app/version.sh@19 -- # get_header_version patch 00:06:59.795 10:30:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:59.795 10:30:25 version -- app/version.sh@14 -- # cut -f2 00:06:59.795 10:30:25 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.795 10:30:25 version -- app/version.sh@19 -- # patch=0 00:06:59.795 10:30:25 version -- app/version.sh@20 -- # get_header_version suffix 00:06:59.795 10:30:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:59.795 10:30:25 version -- app/version.sh@14 -- # cut -f2 00:06:59.795 10:30:25 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.795 10:30:25 version -- app/version.sh@20 -- # suffix=-pre 00:06:59.795 10:30:25 version -- app/version.sh@22 -- # version=25.1 00:06:59.795 10:30:25 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:59.795 10:30:25 version -- app/version.sh@28 -- # version=25.1rc0 00:06:59.795 10:30:25 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:59.795 10:30:25 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:59.795 10:30:25 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:59.795 10:30:25 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:59.795 00:06:59.795 real 0m0.287s 00:06:59.795 user 0m0.191s 00:06:59.795 sys 0m0.124s 00:06:59.795 ************************************ 00:06:59.795 END TEST version 00:06:59.795 ************************************ 00:06:59.795 10:30:25 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:59.795 10:30:25 version -- common/autotest_common.sh@10 -- # set +x 00:07:00.054 10:30:25 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:00.054 10:30:25 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:00.054 10:30:25 -- spdk/autotest.sh@194 -- # uname -s 00:07:00.054 10:30:25 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:00.054 10:30:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:00.054 10:30:25 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:00.054 10:30:25 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:00.054 10:30:25 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:00.054 10:30:25 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:00.054 10:30:25 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:00.054 10:30:25 -- common/autotest_common.sh@10 -- # set +x 00:07:00.054 ************************************ 00:07:00.054 START TEST spdk_dd 00:07:00.054 ************************************ 00:07:00.054 10:30:25 spdk_dd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:00.054 * Looking for test storage... 00:07:00.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:00.054 10:30:25 spdk_dd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:00.054 10:30:25 spdk_dd -- common/autotest_common.sh@1691 -- # lcov --version 00:07:00.054 10:30:25 spdk_dd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:00.054 10:30:25 spdk_dd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.054 10:30:25 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:00.054 10:30:25 spdk_dd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.055 10:30:25 spdk_dd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:00.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.055 --rc genhtml_branch_coverage=1 00:07:00.055 --rc genhtml_function_coverage=1 00:07:00.055 --rc genhtml_legend=1 00:07:00.055 --rc geninfo_all_blocks=1 00:07:00.055 --rc geninfo_unexecuted_blocks=1 00:07:00.055 00:07:00.055 ' 00:07:00.055 10:30:25 spdk_dd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:00.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.055 --rc genhtml_branch_coverage=1 00:07:00.055 --rc genhtml_function_coverage=1 00:07:00.055 --rc genhtml_legend=1 00:07:00.055 --rc geninfo_all_blocks=1 00:07:00.055 --rc geninfo_unexecuted_blocks=1 00:07:00.055 00:07:00.055 ' 00:07:00.055 10:30:25 spdk_dd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:00.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.055 --rc genhtml_branch_coverage=1 00:07:00.055 --rc genhtml_function_coverage=1 00:07:00.055 --rc genhtml_legend=1 00:07:00.055 --rc geninfo_all_blocks=1 00:07:00.055 --rc geninfo_unexecuted_blocks=1 00:07:00.055 00:07:00.055 ' 00:07:00.055 10:30:25 spdk_dd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:00.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.055 --rc genhtml_branch_coverage=1 00:07:00.055 --rc genhtml_function_coverage=1 00:07:00.055 --rc genhtml_legend=1 00:07:00.055 --rc geninfo_all_blocks=1 00:07:00.055 --rc geninfo_unexecuted_blocks=1 00:07:00.055 00:07:00.055 ' 00:07:00.055 10:30:25 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:00.055 10:30:25 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:00.055 10:30:25 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.055 10:30:25 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.055 10:30:25 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.055 10:30:25 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.055 10:30:25 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.055 10:30:25 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.055 10:30:25 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:00.055 10:30:25 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.055 10:30:25 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:00.313 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:00.572 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:00.572 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:00.572 10:30:25 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:00.572 10:30:25 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:00.572 10:30:25 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:00.572 10:30:25 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:00.572 10:30:25 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
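The nvme_in_userspace walk traced above boils down to one lspci pipeline: build the PCI class code for an NVMe controller (class 01, subclass 08, prog-if 02), list every device, and keep the matching BDFs. Below is a condensed sketch of that reading, not the verbatim scripts/common.sh helper; the real code picks lspci only after the "hash lspci" probe seen above, and pci_can_use additionally consults PCI allow/block lists, both empty in this run, which is why 0000:00:10.0 and 0000:00:11.0 both pass.

#!/usr/bin/env bash
# Condensed sketch: enumerate NVMe controllers by PCI class code.
nvme_in_userspace_sketch() {
  local class subclass progif
  class=$(printf '%02x' 1)      # 01 = mass storage controller
  subclass=$(printf '%02x' 8)   # 08 = non-volatile memory subsystem
  progif=$(printf '%02x' 2)     # 02 = NVM Express programming interface
  # lspci -mm: machine-readable, -n: numeric IDs, -D: full domain:bus:dev.func
  lspci -mm -n -D | grep -i -- "-p${progif}" |
    awk -v "cc=\"${class}${subclass}\"" -F ' ' '{if (cc ~ $2) print $1}' |
    tr -d '"'
}
nvme_in_userspace_sketch   # on this VM: 0000:00:10.0 and 0000:00:11.0
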
00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.572 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
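The run of "[[ lib == liburing.so.* ]]" comparisons above, which continues for every remaining library below, is check_liburing from dd/common.sh walking each DT_NEEDED entry of the spdk_dd binary; the result feeds the liburing_in_use/SPDK_TEST_URING gate tested further below in dd.sh@15. A minimal sketch of that loop (the build_config.sh fallback it performs afterwards is omitted):

#!/usr/bin/env bash
# Sketch of check_liburing: scan spdk_dd's ELF dynamic section for liburing.
check_liburing_sketch() {
  local lib liburing_in_use=0
  # "objdump -p ... | grep NEEDED" prints lines such as:
  #   NEEDED               liburing.so.2
  while read -r _ lib _; do
    [[ $lib == liburing.so.* ]] && liburing_in_use=1
  done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)
  (( liburing_in_use )) && printf '* spdk_dd linked to liburing\n'
}
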
00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:00.573 10:30:25 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:00.573 * spdk_dd linked to liburing 00:07:00.574 10:30:25 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:00.574 10:30:25 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:00.574 10:30:25 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:00.574 10:30:25 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:07:00.574 10:30:25 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:00.574 10:30:25 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:00.574 10:30:25 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:00.574 10:30:25 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:00.574 10:30:25 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:00.574 10:30:25 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:00.574 10:30:25 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:00.574 10:30:25 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:00.574 10:30:25 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:00.574 ************************************ 00:07:00.574 START TEST spdk_dd_basic_rw 00:07:00.574 ************************************ 00:07:00.574 10:30:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:00.574 * Looking for test storage... 00:07:00.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:00.574 10:30:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:00.574 10:30:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:00.574 10:30:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lcov --version 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.833 10:30:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:00.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.833 --rc genhtml_branch_coverage=1 00:07:00.833 --rc genhtml_function_coverage=1 00:07:00.833 --rc genhtml_legend=1 00:07:00.834 --rc geninfo_all_blocks=1 00:07:00.834 --rc geninfo_unexecuted_blocks=1 00:07:00.834 00:07:00.834 ' 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:00.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.834 --rc genhtml_branch_coverage=1 00:07:00.834 --rc genhtml_function_coverage=1 00:07:00.834 --rc genhtml_legend=1 00:07:00.834 --rc geninfo_all_blocks=1 00:07:00.834 --rc geninfo_unexecuted_blocks=1 00:07:00.834 00:07:00.834 ' 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:00.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.834 --rc genhtml_branch_coverage=1 00:07:00.834 --rc genhtml_function_coverage=1 00:07:00.834 --rc genhtml_legend=1 00:07:00.834 --rc geninfo_all_blocks=1 00:07:00.834 --rc geninfo_unexecuted_blocks=1 00:07:00.834 00:07:00.834 ' 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:00.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.834 --rc genhtml_branch_coverage=1 00:07:00.834 --rc genhtml_function_coverage=1 00:07:00.834 --rc genhtml_legend=1 00:07:00.834 --rc geninfo_all_blocks=1 00:07:00.834 --rc geninfo_unexecuted_blocks=1 00:07:00.834 00:07:00.834 ' 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
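The lcov probe repeated above (once for spdk_dd, once for spdk_dd_basic_rw) is a generic version comparison: "lt 1.15 2" splits both version strings on dots, dashes and colons and compares them field by field, after which the harness sets lcov_rc_opt to the pre-2.0 option spellings (the lcov_branch_coverage names exported just above). A condensed sketch of cmp_versions under that reading; the real scripts/common.sh also validates each field as a decimal:

#!/usr/bin/env bash
# Sketch of "lt A B": succeed when version A sorts before version B.
lt_sketch() {
  local -a ver1 ver2
  local v
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  # walk the longer of the two field lists; missing fields count as 0
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1   # equal versions are not "less than"
}
lt_sketch 1.15 2 && echo "installed lcov predates 2.x"
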
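What follows is get_native_nvme_bs asking controller 0000:00:10.0 for its native block size: it captures a full spdk_nvme_identify dump (reproduced twice below because each bash regex match traces the whole string) and extracts first the active LBA format index, then that format's data size. A condensed sketch, assuming the identify text is held in one string rather than the mapfile array the script uses:

#!/usr/bin/env bash
# Sketch of get_native_nvme_bs from dd/common.sh (condensed).
get_native_nvme_bs_sketch() {
  local pci=$1 id lbaf re
  id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:${pci}")
  re='Current LBA Format: *LBA Format #([0-9]+)'   # which LBA format is active
  [[ $id =~ $re ]] || return 1
  lbaf=${BASH_REMATCH[1]}                          # "04" on this controller
  re="LBA Format #${lbaf}: Data Size: *([0-9]+)"   # that format's data size
  [[ $id =~ $re ]] || return 1
  echo "${BASH_REMATCH[1]}"                        # 4096 here
}
native_bs=$(get_native_nvme_bs_sketch 0000:00:10.0)
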
00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:00.834 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:01.096 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:01.096 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported
Oversized SGL: Not Supported
SGL Metadata Address: Not Supported
SGL Offset: Not Supported
Transport SGL Data Block: Not Supported
Replay Protected Memory Block: Not Supported
Firmware Slot Information
=========================
Active slot: 1
Slot 1 Firmware Revision: 1.0
Commands Supported and Effects
==============================
Admin Commands
--------------
Delete I/O Submission Queue (00h): Supported
Create I/O Submission Queue (01h): Supported
Get Log Page (02h): Supported
Delete I/O Completion Queue (04h): Supported
Create I/O Completion Queue (05h): Supported
Identify (06h): Supported
Abort (08h): Supported
Set Features (09h): Supported
Get Features (0Ah): Supported
Asynchronous Event Request (0Ch): Supported
Namespace Attachment (15h): Supported NS-Inventory-Change
Directive Send (19h): Supported
Directive Receive (1Ah): Supported
Virtualization Management (1Ch): Supported
Doorbell Buffer Config (7Ch): Supported
Format NVM (80h): Supported LBA-Change
I/O Commands
------------
Flush (00h): Supported LBA-Change
Write (01h): Supported LBA-Change
Read (02h): Supported
Compare (05h): Supported
Write Zeroes (08h): Supported LBA-Change
Dataset Management (09h): Supported LBA-Change
Unknown (0Ch): Supported
Unknown (12h): Supported
Copy (19h): Supported LBA-Change
Unknown (1Dh): Supported LBA-Change
Error Log
=========
Arbitration
===========
Arbitration Burst: no limit
Power Management
================
Number of Power States: 1
Current Power State: Power State #0
Power State #0:
Max Power: 25.00 W
Non-Operational State: Operational
Entry Latency: 16 microseconds
Exit Latency: 4 microseconds
Relative Read Throughput: 0
Relative Read Latency: 0
Relative Write Throughput: 0
Relative Write Latency: 0
Idle Power: Not Reported
Active Power: Not Reported
Non-Operational Permissive Mode: Not Supported
Health Information
==================
Critical Warnings:
Available Spare Space: OK
Temperature: OK
Device Reliability: OK
Read Only: No
Volatile Memory Backup: OK
Current Temperature: 323 Kelvin (50 Celsius)
Temperature Threshold: 343 Kelvin (70 Celsius)
Available Spare: 0%
Available Spare Threshold: 0%
Life Percentage Used: 0%
Data Units Read: 22
Data Units Written: 3
Host Read Commands: 496
Host Write Commands: 2
Controller Busy Time: 0 minutes
Power Cycles: 0
Power On Hours: 0 hours
Unsafe Shutdowns: 0
Unrecoverable Media Errors: 0
Lifetime Error Log Entries: 0
Warning Temperature Time: 0 minutes
Critical Temperature Time: 0 minutes
Number of Queues
================
Number of I/O Submission Queues: 64
Number of I/O Completion Queues: 64
ZNS Specific Controller Data
============================
Zone Append Size Limit: 0
Active Namespaces
=================
Namespace ID:1
Error Recovery Timeout: Unlimited
Command Set Identifier: NVM (00h)
Deallocate: Supported
Deallocated/Unwritten Error: Supported
Deallocated Read Value: All 0x00
Deallocate in Write Zeroes: Not Supported
Deallocated Guard Field: 0xFFFF
Flush: Supported
Reservation: Not Supported
Namespace Sharing Capabilities: Private
Size (in LBAs): 1310720 (5GiB)
Capacity (in LBAs): 1310720 (5GiB)
Utilization (in LBAs): 1310720 (5GiB)
Thin Provisioning: Not Supported
Per-NS Atomic Units: No
Maximum Single Source Range Length: 128
Maximum Copy Length: 128
Maximum Source Range Count: 128
NGUID/EUI64 Never Reused: No
Namespace Write Protected: No
Number of LBA Formats: 8
Current LBA Format: LBA Format #04
LBA Format #00: Data Size: 512 Metadata Size: 0
LBA Format #01: Data Size: 512 Metadata Size: 8
LBA Format #02: Data Size: 512 Metadata Size: 16
LBA Format #03: Data Size: 512 Metadata Size: 64
LBA Format #04: Data Size: 4096 Metadata Size: 0
LBA Format #05: Data Size: 4096 Metadata Size: 8
LBA Format #06: Data Size: 4096 Metadata Size: 16
LBA Format #07: Data Size: 4096 Metadata Size: 64
NVM Specific Namespace Data
===========================
Logical Block Storage Tag Mask: 0
Protection Information Capabilities:
16b Guard Protection Information Storage Tag Support: No
16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
Storage Tag Check Read Support: No
Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
=~ LBA Format #04: Data Size: *([0-9]+) ]]
00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096
00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096
00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096
00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']'
00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # :
00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf
00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable
00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:07:01.097 ************************************
00:07:01.097 START TEST dd_bs_lt_native_bs
00:07:01.097 ************************************
10:30:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1127 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0
00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:07:01.097 10:30:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:07:01.097 {
00:07:01.097   "subsystems": [
00:07:01.097     {
00:07:01.097       "subsystem": "bdev",
00:07:01.097       "config": [
00:07:01.097         {
00:07:01.097           "params": {
00:07:01.097             "trtype": "pcie",
00:07:01.097             "traddr": "0000:00:10.0",
00:07:01.097             "name": "Nvme0"
00:07:01.097           },
00:07:01.097           "method": "bdev_nvme_attach_controller"
00:07:01.097         },
00:07:01.097         {
00:07:01.097           "method": "bdev_wait_for_examine"
00:07:01.097         }
00:07:01.097       ]
00:07:01.097     }
00:07:01.097   ]
00:07:01.097 }
00:07:01.097 [2024-11-15 10:30:26.454732] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:07:01.097 [2024-11-15 10:30:26.454843] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59711 ]
00:07:01.366 [2024-11-15 10:30:26.606534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:01.366 [2024-11-15 10:30:26.677541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:01.366 [2024-11-15 10:30:26.736864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:07:01.366 [2024-11-15 10:30:26.854687] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size
00:07:01.366 [2024-11-15 10:30:26.854811] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:01.633 [2024-11-15 10:30:26.988572] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:07:01.633 ************************************
00:07:01.633 END TEST dd_bs_lt_native_bs
00:07:01.633 ************************************
10:30:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234
10:30:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 ))
10:30:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106
10:30:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in
10:30:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1
10:30:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:01.633
00:07:01.633 real 0m0.663s
00:07:01.633 user 0m0.445s
00:07:01.633 sys 0m0.173s
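
The dd_bs_lt_native_bs block above is a negative test: the suite scrapes the native block size out of the identify dump with the regex match at the top of this section (LBA Format #04, 4096-byte data size), then runs spdk_dd under the NOT wrapper with --bs=2048 and requires it to fail, which it does with the "--bs value cannot be less than ... native block size" error; the es= lines then normalize exit status 234 down to 1 so run_test records a pass. A minimal stand-alone sketch of the same assertion, assuming spdk_dd is built at the path shown in the trace and that nvme.json holds the bdev config the trace prints (both names are illustrative, not the suite's own):

  # Hypothetical re-creation of the dd_bs_lt_native_bs check, not the suite's code.
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  if "$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=2048 --json nvme.json; then
      echo "FAIL: a --bs smaller than the native block size was accepted" >&2
      exit 1
  fi
  echo "OK: undersized --bs rejected as expected"

The dd_rw test that starts below sweeps that native block size shifted left by 0, 1 and 2 (4096, 8192 and 16384 bytes) against queue depths 1 and 64. Each combination generates a random dump file, writes it to Nvme0n1, reads it back into a second dump, compares the two with diff -q, and zeroes the first megabyte of the device before the next round; the block counts come out as 15, 7 and 3 in the traces that follow (sizes 61440, 57344 and 49152 bytes). A sketch of that loop under the same assumptions, with /dev/urandom standing in for the suite's gen_bytes helper:

  # Hypothetical re-creation of the dd_rw sweep, not the suite's code.
  native_bs=4096
  for e in 0 1 2; do
      bs=$((native_bs << e))
      count=$((61440 / bs))   # 15, 7, 3 blocks for the three block sizes
      for qd in 1 64; do
          head -c $((bs * count)) /dev/urandom > dd.dump0
          "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json nvme.json
          "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json nvme.json
          diff -q dd.dump0 dd.dump1
          # clear_nvme equivalent: wipe the first 1 MiB before the next combination
          "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json nvme.json
      done
  done
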
10:30:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:07:01.633 ************************************
00:07:01.633 START TEST dd_rw
00:07:01.633 ************************************
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1127 -- # basic_rw 4096
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64)
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}"
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable
00:07:01.633 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:07:02.567 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62
00:07:02.567 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf
00:07:02.567 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:07:02.567 10:30:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:07:02.567 {
00:07:02.567   "subsystems": [
00:07:02.567     {
00:07:02.567       "subsystem": "bdev",
00:07:02.567       "config": [
00:07:02.567         {
00:07:02.567           "params": {
00:07:02.567             "trtype": "pcie",
00:07:02.567             "traddr": "0000:00:10.0",
00:07:02.567             "name": "Nvme0"
00:07:02.567           },
00:07:02.567           "method": "bdev_nvme_attach_controller"
00:07:02.567         },
00:07:02.567         {
00:07:02.567           "method": "bdev_wait_for_examine"
00:07:02.567         }
00:07:02.567       ]
00:07:02.567     }
00:07:02.567
] 00:07:02.567 } 00:07:02.567 [2024-11-15 10:30:27.816209] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:07:02.567 [2024-11-15 10:30:27.816479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59742 ] 00:07:02.567 [2024-11-15 10:30:27.965174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.567 [2024-11-15 10:30:28.028152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.826 [2024-11-15 10:30:28.084963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.826  [2024-11-15T10:30:28.582Z] Copying: 60/60 [kB] (average 29 MBps) 00:07:03.084 00:07:03.084 10:30:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:03.084 10:30:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:03.084 10:30:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:03.084 10:30:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:03.084 { 00:07:03.084 "subsystems": [ 00:07:03.084 { 00:07:03.084 "subsystem": "bdev", 00:07:03.084 "config": [ 00:07:03.084 { 00:07:03.084 "params": { 00:07:03.084 "trtype": "pcie", 00:07:03.084 "traddr": "0000:00:10.0", 00:07:03.084 "name": "Nvme0" 00:07:03.084 }, 00:07:03.084 "method": "bdev_nvme_attach_controller" 00:07:03.084 }, 00:07:03.084 { 00:07:03.084 "method": "bdev_wait_for_examine" 00:07:03.084 } 00:07:03.084 ] 00:07:03.084 } 00:07:03.084 ] 00:07:03.084 } 00:07:03.084 [2024-11-15 10:30:28.468251] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:03.084 [2024-11-15 10:30:28.468360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59761 ] 00:07:03.343 [2024-11-15 10:30:28.618855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.343 [2024-11-15 10:30:28.682549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.343 [2024-11-15 10:30:28.738180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.601  [2024-11-15T10:30:29.099Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:03.601 00:07:03.601 10:30:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:03.601 10:30:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:03.601 10:30:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:03.601 10:30:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:03.601 10:30:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:03.601 10:30:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:03.601 10:30:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:03.601 10:30:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:03.601 10:30:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:03.601 10:30:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:03.601 10:30:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:03.859 [2024-11-15 10:30:29.141746] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:03.859 [2024-11-15 10:30:29.142074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59771 ] 00:07:03.859 { 00:07:03.859 "subsystems": [ 00:07:03.859 { 00:07:03.859 "subsystem": "bdev", 00:07:03.859 "config": [ 00:07:03.859 { 00:07:03.859 "params": { 00:07:03.859 "trtype": "pcie", 00:07:03.859 "traddr": "0000:00:10.0", 00:07:03.859 "name": "Nvme0" 00:07:03.859 }, 00:07:03.859 "method": "bdev_nvme_attach_controller" 00:07:03.859 }, 00:07:03.859 { 00:07:03.859 "method": "bdev_wait_for_examine" 00:07:03.859 } 00:07:03.859 ] 00:07:03.859 } 00:07:03.859 ] 00:07:03.859 } 00:07:03.859 [2024-11-15 10:30:29.293120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.117 [2024-11-15 10:30:29.357027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.117 [2024-11-15 10:30:29.412606] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.117  [2024-11-15T10:30:29.873Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:04.375 00:07:04.375 10:30:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:04.375 10:30:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:04.375 10:30:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:04.375 10:30:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:04.375 10:30:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:04.375 10:30:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:04.375 10:30:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:04.941 10:30:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:04.941 10:30:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:04.941 10:30:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:04.941 10:30:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:05.201 { 00:07:05.201 "subsystems": [ 00:07:05.201 { 00:07:05.201 "subsystem": "bdev", 00:07:05.201 "config": [ 00:07:05.201 { 00:07:05.201 "params": { 00:07:05.201 "trtype": "pcie", 00:07:05.201 "traddr": "0000:00:10.0", 00:07:05.201 "name": "Nvme0" 00:07:05.201 }, 00:07:05.201 "method": "bdev_nvme_attach_controller" 00:07:05.201 }, 00:07:05.201 { 00:07:05.201 "method": "bdev_wait_for_examine" 00:07:05.201 } 00:07:05.201 ] 00:07:05.201 } 00:07:05.201 ] 00:07:05.201 } 00:07:05.201 [2024-11-15 10:30:30.463420] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:05.201 [2024-11-15 10:30:30.463553] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59801 ] 00:07:05.201 [2024-11-15 10:30:30.616426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.201 [2024-11-15 10:30:30.685949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.462 [2024-11-15 10:30:30.743295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.462  [2024-11-15T10:30:31.218Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:05.720 00:07:05.720 10:30:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:05.720 10:30:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:05.720 10:30:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:05.720 10:30:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:05.720 { 00:07:05.720 "subsystems": [ 00:07:05.720 { 00:07:05.720 "subsystem": "bdev", 00:07:05.720 "config": [ 00:07:05.720 { 00:07:05.720 "params": { 00:07:05.720 "trtype": "pcie", 00:07:05.720 "traddr": "0000:00:10.0", 00:07:05.720 "name": "Nvme0" 00:07:05.720 }, 00:07:05.720 "method": "bdev_nvme_attach_controller" 00:07:05.720 }, 00:07:05.720 { 00:07:05.720 "method": "bdev_wait_for_examine" 00:07:05.720 } 00:07:05.720 ] 00:07:05.720 } 00:07:05.720 ] 00:07:05.720 } 00:07:05.720 [2024-11-15 10:30:31.113387] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:05.720 [2024-11-15 10:30:31.113508] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59809 ] 00:07:05.978 [2024-11-15 10:30:31.264974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.978 [2024-11-15 10:30:31.335375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.978 [2024-11-15 10:30:31.393855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.236  [2024-11-15T10:30:31.734Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:06.236 00:07:06.236 10:30:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:06.236 10:30:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:06.236 10:30:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:06.236 10:30:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:06.236 10:30:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:06.236 10:30:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:06.236 10:30:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:06.236 10:30:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:06.236 10:30:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:06.236 10:30:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:06.236 10:30:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:06.495 { 00:07:06.495 "subsystems": [ 00:07:06.495 { 00:07:06.495 "subsystem": "bdev", 00:07:06.495 "config": [ 00:07:06.495 { 00:07:06.495 "params": { 00:07:06.495 "trtype": "pcie", 00:07:06.495 "traddr": "0000:00:10.0", 00:07:06.495 "name": "Nvme0" 00:07:06.495 }, 00:07:06.495 "method": "bdev_nvme_attach_controller" 00:07:06.495 }, 00:07:06.495 { 00:07:06.495 "method": "bdev_wait_for_examine" 00:07:06.495 } 00:07:06.495 ] 00:07:06.495 } 00:07:06.495 ] 00:07:06.495 } 00:07:06.495 [2024-11-15 10:30:31.803357] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:06.495 [2024-11-15 10:30:31.803808] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59830 ] 00:07:06.495 [2024-11-15 10:30:31.961209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.753 [2024-11-15 10:30:32.023972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.753 [2024-11-15 10:30:32.077962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.753  [2024-11-15T10:30:32.509Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:07.011 00:07:07.011 10:30:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:07.011 10:30:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:07.011 10:30:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:07.011 10:30:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:07.011 10:30:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:07.011 10:30:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:07.011 10:30:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:07.011 10:30:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:07.577 10:30:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:07.577 10:30:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:07.577 10:30:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:07.577 10:30:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:07.834 { 00:07:07.834 "subsystems": [ 00:07:07.834 { 00:07:07.834 "subsystem": "bdev", 00:07:07.834 "config": [ 00:07:07.834 { 00:07:07.834 "params": { 00:07:07.834 "trtype": "pcie", 00:07:07.834 "traddr": "0000:00:10.0", 00:07:07.834 "name": "Nvme0" 00:07:07.834 }, 00:07:07.834 "method": "bdev_nvme_attach_controller" 00:07:07.834 }, 00:07:07.834 { 00:07:07.834 "method": "bdev_wait_for_examine" 00:07:07.834 } 00:07:07.834 ] 00:07:07.834 } 00:07:07.834 ] 00:07:07.834 } 00:07:07.834 [2024-11-15 10:30:33.089551] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:07.834 [2024-11-15 10:30:33.089891] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59849 ] 00:07:07.834 [2024-11-15 10:30:33.250365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.834 [2024-11-15 10:30:33.323549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.093 [2024-11-15 10:30:33.384747] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.093  [2024-11-15T10:30:33.851Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:08.353 00:07:08.353 10:30:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:08.353 10:30:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:08.353 10:30:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:08.353 10:30:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:08.353 { 00:07:08.353 "subsystems": [ 00:07:08.353 { 00:07:08.353 "subsystem": "bdev", 00:07:08.353 "config": [ 00:07:08.353 { 00:07:08.353 "params": { 00:07:08.353 "trtype": "pcie", 00:07:08.353 "traddr": "0000:00:10.0", 00:07:08.353 "name": "Nvme0" 00:07:08.353 }, 00:07:08.353 "method": "bdev_nvme_attach_controller" 00:07:08.353 }, 00:07:08.353 { 00:07:08.353 "method": "bdev_wait_for_examine" 00:07:08.353 } 00:07:08.353 ] 00:07:08.353 } 00:07:08.353 ] 00:07:08.353 } 00:07:08.353 [2024-11-15 10:30:33.764386] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:08.353 [2024-11-15 10:30:33.764550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59868 ] 00:07:08.610 [2024-11-15 10:30:33.917137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.610 [2024-11-15 10:30:33.980164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.610 [2024-11-15 10:30:34.035173] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.868  [2024-11-15T10:30:34.366Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:08.868 00:07:08.868 10:30:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:08.868 10:30:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:08.868 10:30:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:08.868 10:30:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:08.868 10:30:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:08.868 10:30:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:08.868 10:30:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:08.868 10:30:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:08.868 10:30:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:08.868 10:30:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:08.868 10:30:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:09.127 [2024-11-15 10:30:34.402236] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:09.127 [2024-11-15 10:30:34.402327] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59888 ] 00:07:09.127 { 00:07:09.127 "subsystems": [ 00:07:09.127 { 00:07:09.127 "subsystem": "bdev", 00:07:09.127 "config": [ 00:07:09.127 { 00:07:09.127 "params": { 00:07:09.127 "trtype": "pcie", 00:07:09.127 "traddr": "0000:00:10.0", 00:07:09.127 "name": "Nvme0" 00:07:09.127 }, 00:07:09.127 "method": "bdev_nvme_attach_controller" 00:07:09.127 }, 00:07:09.127 { 00:07:09.127 "method": "bdev_wait_for_examine" 00:07:09.127 } 00:07:09.127 ] 00:07:09.127 } 00:07:09.127 ] 00:07:09.127 } 00:07:09.127 [2024-11-15 10:30:34.545479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.127 [2024-11-15 10:30:34.610666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.454 [2024-11-15 10:30:34.665604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.454  [2024-11-15T10:30:35.209Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:09.711 00:07:09.711 10:30:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:09.711 10:30:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:09.711 10:30:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:09.711 10:30:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:09.711 10:30:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:09.711 10:30:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:09.712 10:30:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:10.278 10:30:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:10.278 10:30:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:10.278 10:30:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:10.278 10:30:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:10.278 [2024-11-15 10:30:35.614991] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:10.278 [2024-11-15 10:30:35.615143] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59908 ] 00:07:10.278 { 00:07:10.278 "subsystems": [ 00:07:10.278 { 00:07:10.278 "subsystem": "bdev", 00:07:10.278 "config": [ 00:07:10.278 { 00:07:10.278 "params": { 00:07:10.278 "trtype": "pcie", 00:07:10.278 "traddr": "0000:00:10.0", 00:07:10.278 "name": "Nvme0" 00:07:10.278 }, 00:07:10.278 "method": "bdev_nvme_attach_controller" 00:07:10.278 }, 00:07:10.278 { 00:07:10.278 "method": "bdev_wait_for_examine" 00:07:10.278 } 00:07:10.278 ] 00:07:10.278 } 00:07:10.278 ] 00:07:10.278 } 00:07:10.278 [2024-11-15 10:30:35.764290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.537 [2024-11-15 10:30:35.829647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.537 [2024-11-15 10:30:35.885118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.537  [2024-11-15T10:30:36.294Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:10.796 00:07:10.796 10:30:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:10.796 10:30:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:10.796 10:30:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:10.796 10:30:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:10.796 { 00:07:10.796 "subsystems": [ 00:07:10.796 { 00:07:10.796 "subsystem": "bdev", 00:07:10.796 "config": [ 00:07:10.796 { 00:07:10.796 "params": { 00:07:10.796 "trtype": "pcie", 00:07:10.796 "traddr": "0000:00:10.0", 00:07:10.796 "name": "Nvme0" 00:07:10.796 }, 00:07:10.796 "method": "bdev_nvme_attach_controller" 00:07:10.796 }, 00:07:10.796 { 00:07:10.796 "method": "bdev_wait_for_examine" 00:07:10.796 } 00:07:10.796 ] 00:07:10.796 } 00:07:10.796 ] 00:07:10.796 } 00:07:10.796 [2024-11-15 10:30:36.265236] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:10.796 [2024-11-15 10:30:36.265709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59916 ] 00:07:11.055 [2024-11-15 10:30:36.418851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.055 [2024-11-15 10:30:36.482479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.055 [2024-11-15 10:30:36.537733] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.313  [2024-11-15T10:30:37.069Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:11.571 00:07:11.571 10:30:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:11.571 10:30:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:11.571 10:30:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:11.571 10:30:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:11.571 10:30:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:11.571 10:30:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:11.571 10:30:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:11.571 10:30:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:11.571 10:30:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:11.571 10:30:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:11.571 10:30:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.572 [2024-11-15 10:30:36.904798] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:11.572 [2024-11-15 10:30:36.904893] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59937 ] 00:07:11.572 { 00:07:11.572 "subsystems": [ 00:07:11.572 { 00:07:11.572 "subsystem": "bdev", 00:07:11.572 "config": [ 00:07:11.572 { 00:07:11.572 "params": { 00:07:11.572 "trtype": "pcie", 00:07:11.572 "traddr": "0000:00:10.0", 00:07:11.572 "name": "Nvme0" 00:07:11.572 }, 00:07:11.572 "method": "bdev_nvme_attach_controller" 00:07:11.572 }, 00:07:11.572 { 00:07:11.572 "method": "bdev_wait_for_examine" 00:07:11.572 } 00:07:11.572 ] 00:07:11.572 } 00:07:11.572 ] 00:07:11.572 } 00:07:11.572 [2024-11-15 10:30:37.049322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.830 [2024-11-15 10:30:37.112583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.830 [2024-11-15 10:30:37.167603] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.830  [2024-11-15T10:30:37.586Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:12.088 00:07:12.088 10:30:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:12.088 10:30:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:12.088 10:30:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:12.088 10:30:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:12.088 10:30:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:12.088 10:30:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:12.088 10:30:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:12.088 10:30:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:12.654 10:30:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:12.654 10:30:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:12.654 10:30:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:12.654 10:30:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:12.655 [2024-11-15 10:30:38.058853] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:12.655 [2024-11-15 10:30:38.058968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59956 ] 00:07:12.655 { 00:07:12.655 "subsystems": [ 00:07:12.655 { 00:07:12.655 "subsystem": "bdev", 00:07:12.655 "config": [ 00:07:12.655 { 00:07:12.655 "params": { 00:07:12.655 "trtype": "pcie", 00:07:12.655 "traddr": "0000:00:10.0", 00:07:12.655 "name": "Nvme0" 00:07:12.655 }, 00:07:12.655 "method": "bdev_nvme_attach_controller" 00:07:12.655 }, 00:07:12.655 { 00:07:12.655 "method": "bdev_wait_for_examine" 00:07:12.655 } 00:07:12.655 ] 00:07:12.655 } 00:07:12.655 ] 00:07:12.655 } 00:07:12.913 [2024-11-15 10:30:38.208757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.913 [2024-11-15 10:30:38.274543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.913 [2024-11-15 10:30:38.330363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.225  [2024-11-15T10:30:38.723Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:13.225 00:07:13.225 10:30:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:13.225 10:30:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:13.225 10:30:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:13.225 10:30:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:13.225 [2024-11-15 10:30:38.695043] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:13.225 [2024-11-15 10:30:38.695770] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59975 ] 00:07:13.484 { 00:07:13.484 "subsystems": [ 00:07:13.484 { 00:07:13.484 "subsystem": "bdev", 00:07:13.484 "config": [ 00:07:13.484 { 00:07:13.484 "params": { 00:07:13.484 "trtype": "pcie", 00:07:13.484 "traddr": "0000:00:10.0", 00:07:13.484 "name": "Nvme0" 00:07:13.484 }, 00:07:13.484 "method": "bdev_nvme_attach_controller" 00:07:13.484 }, 00:07:13.484 { 00:07:13.484 "method": "bdev_wait_for_examine" 00:07:13.484 } 00:07:13.484 ] 00:07:13.484 } 00:07:13.484 ] 00:07:13.484 } 00:07:13.484 [2024-11-15 10:30:38.840083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.484 [2024-11-15 10:30:38.906659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.484 [2024-11-15 10:30:38.963746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.743  [2024-11-15T10:30:39.499Z] Copying: 48/48 [kB] (average 23 MBps) 00:07:14.001 00:07:14.001 10:30:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:14.001 10:30:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:14.001 10:30:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:14.001 10:30:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:14.001 10:30:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:14.001 10:30:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:14.001 10:30:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:14.001 10:30:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:14.001 10:30:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:14.001 10:30:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:14.001 10:30:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:14.001 [2024-11-15 10:30:39.334068] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:14.001 [2024-11-15 10:30:39.334394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59985 ] 00:07:14.001 { 00:07:14.001 "subsystems": [ 00:07:14.001 { 00:07:14.001 "subsystem": "bdev", 00:07:14.001 "config": [ 00:07:14.001 { 00:07:14.001 "params": { 00:07:14.001 "trtype": "pcie", 00:07:14.001 "traddr": "0000:00:10.0", 00:07:14.002 "name": "Nvme0" 00:07:14.002 }, 00:07:14.002 "method": "bdev_nvme_attach_controller" 00:07:14.002 }, 00:07:14.002 { 00:07:14.002 "method": "bdev_wait_for_examine" 00:07:14.002 } 00:07:14.002 ] 00:07:14.002 } 00:07:14.002 ] 00:07:14.002 } 00:07:14.002 [2024-11-15 10:30:39.477093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.259 [2024-11-15 10:30:39.541914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.259 [2024-11-15 10:30:39.599869] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.259  [2024-11-15T10:30:40.015Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:14.517 00:07:14.517 10:30:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:14.517 10:30:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:14.517 10:30:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:14.517 10:30:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:14.517 10:30:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:14.517 10:30:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:14.517 10:30:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.085 10:30:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:15.085 10:30:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:15.085 10:30:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:15.085 10:30:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.085 [2024-11-15 10:30:40.477541] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:15.085 [2024-11-15 10:30:40.477921] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60012 ] 00:07:15.085 { 00:07:15.085 "subsystems": [ 00:07:15.085 { 00:07:15.085 "subsystem": "bdev", 00:07:15.085 "config": [ 00:07:15.085 { 00:07:15.085 "params": { 00:07:15.085 "trtype": "pcie", 00:07:15.085 "traddr": "0000:00:10.0", 00:07:15.085 "name": "Nvme0" 00:07:15.085 }, 00:07:15.085 "method": "bdev_nvme_attach_controller" 00:07:15.085 }, 00:07:15.085 { 00:07:15.085 "method": "bdev_wait_for_examine" 00:07:15.085 } 00:07:15.085 ] 00:07:15.085 } 00:07:15.085 ] 00:07:15.085 } 00:07:15.344 [2024-11-15 10:30:40.627232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.344 [2024-11-15 10:30:40.692576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.344 [2024-11-15 10:30:40.749235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.602  [2024-11-15T10:30:41.100Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:15.602 00:07:15.602 10:30:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:15.602 10:30:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:15.602 10:30:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:15.602 10:30:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.860 { 00:07:15.860 "subsystems": [ 00:07:15.860 { 00:07:15.860 "subsystem": "bdev", 00:07:15.860 "config": [ 00:07:15.860 { 00:07:15.861 "params": { 00:07:15.861 "trtype": "pcie", 00:07:15.861 "traddr": "0000:00:10.0", 00:07:15.861 "name": "Nvme0" 00:07:15.861 }, 00:07:15.861 "method": "bdev_nvme_attach_controller" 00:07:15.861 }, 00:07:15.861 { 00:07:15.861 "method": "bdev_wait_for_examine" 00:07:15.861 } 00:07:15.861 ] 00:07:15.861 } 00:07:15.861 ] 00:07:15.861 } 00:07:15.861 [2024-11-15 10:30:41.119827] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:15.861 [2024-11-15 10:30:41.119937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60025 ] 00:07:15.861 [2024-11-15 10:30:41.268266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.861 [2024-11-15 10:30:41.331793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.119 [2024-11-15 10:30:41.387350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.119  [2024-11-15T10:30:41.876Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:16.378 00:07:16.378 10:30:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:16.378 10:30:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:16.378 10:30:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:16.378 10:30:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:16.378 10:30:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:16.378 10:30:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:16.378 10:30:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:16.378 10:30:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:16.378 10:30:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:16.378 10:30:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:16.378 10:30:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:16.378 { 00:07:16.378 "subsystems": [ 00:07:16.378 { 00:07:16.378 "subsystem": "bdev", 00:07:16.378 "config": [ 00:07:16.378 { 00:07:16.378 "params": { 00:07:16.378 "trtype": "pcie", 00:07:16.378 "traddr": "0000:00:10.0", 00:07:16.378 "name": "Nvme0" 00:07:16.378 }, 00:07:16.378 "method": "bdev_nvme_attach_controller" 00:07:16.378 }, 00:07:16.378 { 00:07:16.378 "method": "bdev_wait_for_examine" 00:07:16.378 } 00:07:16.378 ] 00:07:16.378 } 00:07:16.378 ] 00:07:16.378 } 00:07:16.378 [2024-11-15 10:30:41.767211] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:16.378 [2024-11-15 10:30:41.767313] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60041 ] 00:07:16.636 [2024-11-15 10:30:41.915145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.636 [2024-11-15 10:30:41.978756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.636 [2024-11-15 10:30:42.033594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.893  [2024-11-15T10:30:42.391Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:16.893 00:07:16.893 00:07:16.893 real 0m15.234s 00:07:16.893 user 0m11.248s 00:07:16.893 sys 0m5.546s 00:07:16.893 10:30:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:16.893 ************************************ 00:07:16.893 END TEST dd_rw 00:07:16.893 ************************************ 00:07:16.893 10:30:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:17.150 10:30:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:17.150 10:30:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:17.150 10:30:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:17.150 10:30:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:17.150 ************************************ 00:07:17.150 START TEST dd_rw_offset 00:07:17.150 ************************************ 00:07:17.150 10:30:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1127 -- # basic_offset 00:07:17.150 10:30:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:17.150 10:30:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:17.150 10:30:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:17.150 10:30:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:17.150 10:30:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:17.150 10:30:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=vtk10vu8v17l3jgvxqv66y0gu5qqwvm3mc1tui84i5t1py1bfy9czjvodc5fkrekm1bdijdg0t91w7a46x0we7wjcpipaqkwm8x558r19p4gghh3c94d4jhcm2ougohpc3q5cal40ygqtvreqqvoh13ti9u347g387pk5y4gona4au9vr8cg9a27xk61cz00xhzeyi3dp1wu45tt5nrofwicgstx2vrlmbmo4iroupl1l9pmkqt7njtt6466jxwpmjngsdql87nzfdlj9tfzrbzdue6x7d1wzgehjhu417anw8aw1qxft84h1qvc86mkjgj2y8puzi36is5n1yt4h77wf46ft4k5zik5izekmni6i6i7cyrdkbqft7x7wkljhc15ua61jo9ne20ho2gjq4zrouo23na99s8df8wmbtvu5qs4qqu6t1x1s656buowilx2fn3fx7sfigv5wsq8dsva1vb60s836giwpp4x308q60xcmub24c2ln2jt03hqqcnvhnegjipbbn5uwe0j49psq3laxfqtwqcii6auo3x6jzamueh9kezvp1y1oorvr0tmtwqantdqi282cwhv07b1rcoif3ni9b1w1g7qi3mxtt01skma3i7ncup68l9exupn445uennb68a1axyukmsqccislizbscm4idr7xmc3baixvvdbbwbvwjiuepbgyh3qnbao6fy1meeqrc1jworegrizolgu1uuiwtwlzre0dtypg0pm98te38esgjzclz58m128nhvf3r6bph4uailwpdmdvr4o9mphlkw6i3pkr48ka1f8fdgg4tzmoi9qskthx5bafs2b429c957k0x7w6n7ev3reiu4ws26nmboy6i4s9kt315o1jatwza242xd35z7i5iq6z3o6yq3r7wyn32xucs1rxkypq53tw00mt3nfpj1b3bt9598kx52q7o1xvnnx8ch0kunpo7vxuzjbo19js6dsw5f6vj5yy8kl1khaa5yphwlsc23gwsth9slvggev9dbkubbq5qvjbg7td6bkloc1fyohhii1qa1a2q1jow9x2sx2gbf10plyl4nafjkme7ndi5017oe6x5v1u48ko42ryxow9vvko1x0ae2sri712zyuc26xgxfvepha9ltt1iwe5r00r1jd7e1w7fb6zkd2qfpohk8nwg732cebadhj9w9ehiqczj39sm9ezvz28k6ik2o9dbmm2mgh4nsn0jdi8k0wi7e0rbfqz0f5dpw6vanp8wr9x0y65mkqvjpvf0c0p8rkofdom2h1m5frkbr4ubz8doqkto8cxawsuxa8kflk4rzivab8talkcosngw5cozxdpy49ru5za8yozob4y7g7z5ffjdaovi60ew8xz2wgrr5c4ekowkkbjkq2gqalr7i9xpicq2suafw9ihzq4yrd0jpcui2imofbyb7i33hyqynjz253emvfpw032qnpk61v3uaru38xbnysvlbrnhy1p45lyapcvngd500zo730lq8nc34jonmcifckemvkewcerxx73wxx4q2nrckkuhhwbr70o3idl10rgjb9auku0qksv2102vepmuciv0dj9zxe4zilhvqcqz0vufo0isx1jv4wul6tl8kk0hdpaqanl26x667xazpupewx7wpe7lfjrkha6g6wb07trfd3ybet39bey8gnopugiqlaldxi4dl2p95vohwk8ya0uj8u272b7smxob04d82imhr9jmjd3dy6yqh7h7ldoilaldrxyo3r2fa9yd3pjwshmp48r3n52ta3nvv6v1kxu8k3goa746vmhxtc2wvhopt68joqukiiixqrr0nqthuymw34f7miob856q13je1860vauumkupn8n6ctxgdhd03jetof8e7bl71ilmv6gbvqnqweeplgvwtwbrey176g6n5798qztxu6utpimdh15c46bb515nh8ea1e33j2utk9ojiwm2o122313fnnwuuujq56y8yh61oqtlwtj0bt85uyluwi63qhhoo2m0tfeb9m5wumt7ghmsvfo60wnt8za2a9uzn77zjbu2jwg3xjruonid4tt2p5vwhjwj0e8k50tqwbb6uqv3cn21plop5803p0mz36qrehrcocohgolcwm2nxsk0rc2glhwnz3iodzsx91wwlfn9h7fn6g26z4or4319tqbqsh1odsje0zftgu77xd8hgjao92xev7fucgq9d8hm7k12r4bbht2sgcjp0aogpwfiqwufpeqea1if399vxl9xw37fx8wy94i3nrszwhw8jqwi59hiiij33a505gdgdpzkyl9j3bthqx4ruzv7jgtdnbi8rkpzcpipyoa0eaamboerlzi9247d94etw8j6nok88d61p3ckg07ail3s5pfermkm21q51fdrrgmegu54ra99p6rxfq0q9wggtflbxi977lxq9dbnt8lk8ll2no9keq2z0s5h3jbr6s9w80z48uvajq835grnvjc7gkkw6kysw5d74u273pwhjzk5uudhpqa4n348wfdy591h73d71klt3zqz3skugh87myliw6ddjvgo19i66mbn8ssyvqi6d340ivefxh6323u6lbflvx90fnpp4b9ccmjvkygyzqq7k9z75gpqe0z3k26cbc54wkifr3vxf9b3zlpeo5b0gp8r7gt42u02s0475kigitkn5qgx689d4e8ksfh0g2pujqa3176ezq0syxku54u70xa4fwv8g6agg8sm7kor7p2gkvcwba32z28o56t4thymco2d7hh9zpi060ge9n2d3uoz7sftb4lssepqcc4jpx0gpseeq0hue6o5ojru89fmff3c1lhj1a8s3lexa0nqjhqw8m93bt5el3rv1ilr527zh4h10k4euej9jw0fo9cbclvbnd7gx2c1wvr5f00b2dr8fpgp7xqv5rodwokou64rkwd38k02r07bypoo3ai4xguwzggds4qoqrl7j2xwxd54c30ewjd5te48bg3h2sul83fz5xuaqzhddzrmt452khvm9nsuuopyc2v2eow8ddkoupffk7tobjokmbnvpf2w03ltda7m8d11ag7e2ob26aygzpdm20cf3x106z4nlubx1nnn49opa0tlm9u6w72y94lfj0703fa3rz5gr921mrvxxr52btteqf49t5nyxr675pa0qonomoqsrakloy1gxzhvs67odhtd4sjwwzyhy614n91kpy184d3j2busxhcuynvczxouw3ck9mewafjointlirsoc7m3ixshhqz380bplmangprkurr2wr8drlgdju6js4progq1gh7mrvntykx4uucx3r66zgmg2fecfzgfjz8gaxr5ibefuki94czuiq8yhfof6gkwo7j7by2ni3709yi98tpimp8vbvhkevvwv09vtcdz5god4b582cx9nlvc2xcf27hsbggc984hg37rvbqtuvkg4sd9ekalmmczlujppbzxdvs1sa1ptldh79elh8tjtq16
6tfltfxg9uckn25qck0g7mj8lwr6xnd5zolu1fxhb5p2pp5y801jmgiq1ut4p45c92jpryc1p80pp5lx5rcnbfiqrtvq7i0hov0c9qubod9dkxu8bvv9z7px8n1zvwr20t7g0wu9dltiwff4kcr4fq576jpupolejgjidwyoindqcepc85xiej8uf12xiujijze0sagny9srxe2kqy2b4wqx2t2ms8x5bomxo8emzofcz2abj5by2v16es8igalzo7qo01wkc1tbwwfhiz4hoxb2jtfh56w630y05556mpjr6402elasg1l42n84178zdieu8qeedtcteioncmpp343rxjptm5hle5bh3kmxle6jro3kvv1qs5nl8zky9ii4v2p7uywkhi8xn3ec8ualx2y98de9iz2766btgvw7r6wgu9qc0jedb722jt2p3rhzdk16kty38lvajftdmnt7qg9waioqfpjfc3ebxh4qsuy0o49n79m34cxs05h1tjifqei7sfh7iokv8qco59qvszww0srwttpder 00:07:17.150 10:30:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:17.150 10:30:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:17.150 10:30:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:17.150 10:30:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:17.150 [2024-11-15 10:30:42.509808] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:07:17.150 [2024-11-15 10:30:42.510167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60077 ] 00:07:17.150 { 00:07:17.150 "subsystems": [ 00:07:17.150 { 00:07:17.150 "subsystem": "bdev", 00:07:17.150 "config": [ 00:07:17.150 { 00:07:17.150 "params": { 00:07:17.150 "trtype": "pcie", 00:07:17.150 "traddr": "0000:00:10.0", 00:07:17.150 "name": "Nvme0" 00:07:17.150 }, 00:07:17.150 "method": "bdev_nvme_attach_controller" 00:07:17.150 }, 00:07:17.150 { 00:07:17.150 "method": "bdev_wait_for_examine" 00:07:17.150 } 00:07:17.150 ] 00:07:17.150 } 00:07:17.150 ] 00:07:17.150 } 00:07:17.409 [2024-11-15 10:30:42.657467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.409 [2024-11-15 10:30:42.723099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.409 [2024-11-15 10:30:42.779594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.409  [2024-11-15T10:30:43.165Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:17.667 00:07:17.667 10:30:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:17.667 10:30:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:17.667 10:30:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:17.667 10:30:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:17.667 { 00:07:17.667 "subsystems": [ 00:07:17.667 { 00:07:17.667 "subsystem": "bdev", 00:07:17.667 "config": [ 00:07:17.667 { 00:07:17.667 "params": { 00:07:17.667 "trtype": "pcie", 00:07:17.667 "traddr": "0000:00:10.0", 00:07:17.667 "name": "Nvme0" 00:07:17.667 }, 00:07:17.667 "method": "bdev_nvme_attach_controller" 00:07:17.667 }, 00:07:17.667 { 00:07:17.667 "method": "bdev_wait_for_examine" 00:07:17.667 } 00:07:17.667 ] 00:07:17.667 } 00:07:17.667 ] 00:07:17.667 } 00:07:17.667 [2024-11-15 10:30:43.151452] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:17.667 [2024-11-15 10:30:43.151586] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60090 ]
00:07:17.925 [2024-11-15 10:30:43.301282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:17.925 [2024-11-15 10:30:43.371833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:18.183 [2024-11-15 10:30:43.431139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:07:18.183  [2024-11-15T10:30:43.940Z] Copying: 4096/4096 [B] (average 4000 kBps)
00:07:18.442 10:30:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check
00:07:18.442 10:30:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ vtk10vu8v17l3jgvxqv66y0gu5qqwvm3mc1tui84i5t1py1bfy9czjvodc5fkrekm1bdijdg0t91w7a46x0we7wjcpipaqkwm8x558r19p4gghh3c94d4... == \v\t\k\1\0\v\u\8\v\1\7\l\3... ]] [4096-byte random payload and its shell-escaped duplicate elided; the bytes read back matched the pattern that was written]
00:07:18.442 ************************************
00:07:18.442 END TEST dd_rw_offset
00:07:18.442 ************************************
00:07:18.443 real 0m1.351s
00:07:18.443 user 0m0.921s
00:07:18.443 sys 0m0.619s
00:07:18.443 10:30:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:18.443 10:30:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x
00:07:18.443 10:30:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup
00:07:18.443 10:30:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1
00:07:18.443 10:30:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:07:18.443 10:30:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref=
00:07:18.443 10:30:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff
00:07:18.443 10:30:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576
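The dd_rw_offset check above generated a fresh 4096-byte pattern, pushed it through spdk_dd at an offset, read the same region back (read -rn4096 data_check), and string-compared the two. A minimal sketch of that round-trip using plain GNU dd against a scratch file; file names here are illustrative and the scratch file stands in for the SPDK bdev the real test targets:

    truncate -s 8K disk.img                                    # scratch file standing in for the bdev
    head -c 4096 /dev/urandom > payload.bin                    # the random pattern (gen_bytes in the test)
    dd if=payload.bin of=disk.img bs=4096 seek=1 conv=notrunc  # write the pattern at byte offset 4096
    dd if=disk.img of=readback.bin bs=4096 skip=1 count=1      # read 4096 B back from the same offset
    cmp payload.bin readback.bin && echo 'offset round-trip OK'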
00:07:18.443 10:30:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:18.443 10:30:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:18.443 10:30:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:18.443 10:30:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:18.443 10:30:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:18.443 [2024-11-15 10:30:43.845473] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:07:18.443 [2024-11-15 10:30:43.845751] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60121 ] 00:07:18.443 { 00:07:18.443 "subsystems": [ 00:07:18.443 { 00:07:18.443 "subsystem": "bdev", 00:07:18.443 "config": [ 00:07:18.443 { 00:07:18.443 "params": { 00:07:18.443 "trtype": "pcie", 00:07:18.443 "traddr": "0000:00:10.0", 00:07:18.443 "name": "Nvme0" 00:07:18.443 }, 00:07:18.443 "method": "bdev_nvme_attach_controller" 00:07:18.443 }, 00:07:18.443 { 00:07:18.443 "method": "bdev_wait_for_examine" 00:07:18.443 } 00:07:18.443 ] 00:07:18.443 } 00:07:18.443 ] 00:07:18.443 } 00:07:18.700 [2024-11-15 10:30:43.992088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.700 [2024-11-15 10:30:44.056074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.700 [2024-11-15 10:30:44.111942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.053  [2024-11-15T10:30:44.551Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:19.053 00:07:19.053 10:30:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:19.053 ************************************ 00:07:19.053 END TEST spdk_dd_basic_rw 00:07:19.053 ************************************ 00:07:19.053 00:07:19.053 real 0m18.464s 00:07:19.053 user 0m13.285s 00:07:19.053 sys 0m6.849s 00:07:19.053 10:30:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:19.053 10:30:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:19.053 10:30:44 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:19.053 10:30:44 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:19.053 10:30:44 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:19.053 10:30:44 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:19.053 ************************************ 00:07:19.053 START TEST spdk_dd_posix 00:07:19.053 ************************************ 00:07:19.053 10:30:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:19.343 * Looking for test storage... 
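The clear_nvme step above shows the pattern every one of these runs uses: a generated JSON config is handed to spdk_dd on --json /dev/fd/62, attaching PCIe controller 0000:00:10.0 as Nvme0 and waiting for bdev examine, then a single 1 MiB block of zeroes is written to the bdev. A sketch of an equivalent standalone invocation, with process substitution standing in for the generated descriptor (the JSON is copied from the trace):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 \
      --json <(cat <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" } ] } ] }
    EOF
    )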
00:07:19.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:19.343 10:30:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:19.343 10:30:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lcov --version 00:07:19.343 10:30:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:19.343 10:30:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:19.343 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.343 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.343 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.343 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.343 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.343 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.343 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:19.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.344 --rc genhtml_branch_coverage=1 00:07:19.344 --rc genhtml_function_coverage=1 00:07:19.344 --rc genhtml_legend=1 00:07:19.344 --rc geninfo_all_blocks=1 00:07:19.344 --rc geninfo_unexecuted_blocks=1 00:07:19.344 00:07:19.344 ' 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:19.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.344 --rc genhtml_branch_coverage=1 00:07:19.344 --rc genhtml_function_coverage=1 00:07:19.344 --rc genhtml_legend=1 00:07:19.344 --rc geninfo_all_blocks=1 00:07:19.344 --rc geninfo_unexecuted_blocks=1 00:07:19.344 00:07:19.344 ' 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:19.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.344 --rc genhtml_branch_coverage=1 00:07:19.344 --rc genhtml_function_coverage=1 00:07:19.344 --rc genhtml_legend=1 00:07:19.344 --rc geninfo_all_blocks=1 00:07:19.344 --rc geninfo_unexecuted_blocks=1 00:07:19.344 00:07:19.344 ' 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:19.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.344 --rc genhtml_branch_coverage=1 00:07:19.344 --rc genhtml_function_coverage=1 00:07:19.344 --rc genhtml_legend=1 00:07:19.344 --rc geninfo_all_blocks=1 00:07:19.344 --rc geninfo_unexecuted_blocks=1 00:07:19.344 00:07:19.344 ' 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin [the exported PATH repeats these toolchain prefixes several times; duplicate entries elided]
00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=... [same directories, re-prepended]
00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=... [same directories, re-prepended]
00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH
00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo ... [the exported PATH, as above]
00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO'
00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use'
00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO'
00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT
00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests
00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use'
00:07:19.344 * First test run, liburing in use
00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append
00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- #
xtrace_disable 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:19.344 ************************************ 00:07:19.344 START TEST dd_flag_append 00:07:19.344 ************************************ 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1127 -- # append 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=iabptekizqrrpohvogrdgbelcpm2122z 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=q3wlyit07mkuwi8mta5hgnklpvx5jwlv 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s iabptekizqrrpohvogrdgbelcpm2122z 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s q3wlyit07mkuwi8mta5hgnklpvx5jwlv 00:07:19.344 10:30:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:19.344 [2024-11-15 10:30:44.759119] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
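dd_flag_append, whose run output follows, seeds dd.dump0 and dd.dump1 with two fresh 32-character strings and copies dump0 onto dump1 with --oflag=append; the pass condition checked below is that dump1 ends up as its original contents followed by dump0's. A sketch of the same check with GNU dd; note that plain dd also needs conv=notrunc so oflag=append does not truncate the destination first:

    dump0=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)   # stands in for gen_bytes 32
    dump1=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1
    dd if=dd.dump0 of=dd.dump1 oflag=append conv=notrunc   # O_APPEND write onto dd.dump1
    [[ $(< dd.dump1) == "${dump1}${dump0}" ]] && echo 'append verified'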
00:07:19.344 [2024-11-15 10:30:44.759235] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60193 ] 00:07:19.602 [2024-11-15 10:30:44.908470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.602 [2024-11-15 10:30:44.974896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.602 [2024-11-15 10:30:45.031550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.602  [2024-11-15T10:30:45.358Z] Copying: 32/32 [B] (average 31 kBps) 00:07:19.860 00:07:19.860 ************************************ 00:07:19.860 END TEST dd_flag_append 00:07:19.860 ************************************ 00:07:19.860 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ q3wlyit07mkuwi8mta5hgnklpvx5jwlviabptekizqrrpohvogrdgbelcpm2122z == \q\3\w\l\y\i\t\0\7\m\k\u\w\i\8\m\t\a\5\h\g\n\k\l\p\v\x\5\j\w\l\v\i\a\b\p\t\e\k\i\z\q\r\r\p\o\h\v\o\g\r\d\g\b\e\l\c\p\m\2\1\2\2\z ]] 00:07:19.860 00:07:19.860 real 0m0.572s 00:07:19.860 user 0m0.320s 00:07:19.860 sys 0m0.274s 00:07:19.860 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:19.860 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:19.860 10:30:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:19.860 10:30:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:19.860 10:30:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:19.860 10:30:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:19.860 ************************************ 00:07:19.860 START TEST dd_flag_directory 00:07:19.860 ************************************ 00:07:19.860 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1127 -- # directory 00:07:19.860 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:19.860 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:07:19.860 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:19.860 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.860 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.860 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.860 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.860 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.860 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.860 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.860 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.860 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:20.118 [2024-11-15 10:30:45.371628] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:07:20.118 [2024-11-15 10:30:45.371752] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60221 ] 00:07:20.118 [2024-11-15 10:30:45.520790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.118 [2024-11-15 10:30:45.585803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.376 [2024-11-15 10:30:45.640720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.376 [2024-11-15 10:30:45.680577] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:20.376 [2024-11-15 10:30:45.680848] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:20.376 [2024-11-15 10:30:45.680877] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.376 [2024-11-15 10:30:45.808340] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:20.635 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:07:20.635 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:20.635 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:07:20.635 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:07:20.635 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:07:20.635 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:20.635 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:20.635 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:07:20.635 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:20.635 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.635 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.635 10:30:45 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.635 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.635 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.635 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.635 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.635 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:20.635 10:30:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:20.635 [2024-11-15 10:30:45.931177] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:07:20.635 [2024-11-15 10:30:45.931504] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60231 ] 00:07:20.635 [2024-11-15 10:30:46.073968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.894 [2024-11-15 10:30:46.140934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.894 [2024-11-15 10:30:46.196276] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.894 [2024-11-15 10:30:46.235927] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:20.894 [2024-11-15 10:30:46.236183] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:20.894 [2024-11-15 10:30:46.236211] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.894 [2024-11-15 10:30:46.356279] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:21.152 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:21.153 00:07:21.153 real 0m1.112s 00:07:21.153 user 0m0.613s 00:07:21.153 sys 0m0.288s 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:21.153 ************************************ 00:07:21.153 END TEST dd_flag_directory 00:07:21.153 ************************************ 00:07:21.153 10:30:46 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:21.153 ************************************ 00:07:21.153 START TEST dd_flag_nofollow 00:07:21.153 ************************************ 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1127 -- # nofollow 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:21.153 10:30:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:21.153 [2024-11-15 10:30:46.544938] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
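dd_flag_nofollow symlinks dd.dump0.link and dd.dump1.link to the real dump files, then asserts that the run launched above (--iflag=nofollow on the input link) and the later --oflag=nofollow run both fail with 'Too many levels of symbolic links': O_NOFOLLOW makes open(2) return ELOOP when the final path component is a symlink. A sketch of the failing case with GNU dd, which accepts the same flag name:

    ln -fs dd.dump0 dd.dump0.link
    dd if=dd.dump0.link iflag=nofollow of=out.bin
    # expected failure, e.g.: dd: failed to open 'dd.dump0.link': Too many levels of symbolic links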
00:07:21.153 [2024-11-15 10:30:46.545193] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60259 ] 00:07:21.412 [2024-11-15 10:30:46.697252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.412 [2024-11-15 10:30:46.766722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.412 [2024-11-15 10:30:46.824578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.412 [2024-11-15 10:30:46.865383] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:21.412 [2024-11-15 10:30:46.865655] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:21.412 [2024-11-15 10:30:46.865683] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.671 [2024-11-15 10:30:46.986306] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:21.671 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:07:21.671 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:21.671 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:07:21.671 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:07:21.671 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:07:21.671 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:21.671 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:21.671 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:07:21.671 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:21.671 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.671 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.671 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.671 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.671 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.671 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.671 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.671 10:30:47 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:21.671 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:21.671 [2024-11-15 10:30:47.119155] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:07:21.671 [2024-11-15 10:30:47.119487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60274 ] 00:07:21.929 [2024-11-15 10:30:47.268646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.929 [2024-11-15 10:30:47.333797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.929 [2024-11-15 10:30:47.390913] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.188 [2024-11-15 10:30:47.432454] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:22.188 [2024-11-15 10:30:47.432530] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:22.188 [2024-11-15 10:30:47.432553] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:22.188 [2024-11-15 10:30:47.554321] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:22.188 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:07:22.188 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:22.188 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:07:22.188 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:07:22.188 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:07:22.188 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:22.188 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:22.188 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:22.188 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:22.188 10:30:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:22.447 [2024-11-15 10:30:47.688052] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
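After both nofollow runs have failed as required, the copy started above goes through dd.dump0.link without the flag, so the link is dereferenced and the 512-byte payload lands in dd.dump1; the pattern check in the output below confirms it. The same positive case as a sketch:

    head -c 512 /dev/urandom > dd.dump0
    ln -fs dd.dump0 dd.dump0.link
    dd if=dd.dump0.link of=dd.dump1 bs=512 count=1   # no nofollow: the symlink is followed
    cmp dd.dump0 dd.dump1 && echo 'copy through symlink OK'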
00:07:22.447 [2024-11-15 10:30:47.688163] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60282 ]
00:07:22.447 [2024-11-15 10:30:47.835524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:22.447 [2024-11-15 10:30:47.898239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:22.706 [2024-11-15 10:30:47.952298] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:07:22.706  [2024-11-15T10:30:48.204Z] Copying: 512/512 [B] (average 500 kBps)
00:07:22.706 ************************************
00:07:22.706 END TEST dd_flag_nofollow
00:07:22.706 ************************************
00:07:22.706 10:30:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ x4p5tjhxvbd7nf30svjuthyt9o9895h019yyjq8gqpeht5c0kuicmqfe4mo369tbj5njf1eryzkspb0... == \x\4\p\5\t\j\h\x... ]] [512-byte random payload and its shell-escaped duplicate elided; the copy made through the symlink matched the source]
00:07:22.706 real 0m1.709s
00:07:22.706 user 0m0.962s
00:07:22.706 sys 0m0.552s
00:07:22.706 10:30:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:22.706 10:30:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x
00:07:22.964 10:30:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime
00:07:22.964 10:30:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:22.964 10:30:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:22.964 10:30:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:07:22.964 ************************************
00:07:22.964 START TEST dd_flag_noatime
00:07:22.964 ************************************
00:07:22.964 10:30:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1127 -- # noatime
00:07:22.964 10:30:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local
atime_if 00:07:22.964 10:30:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:22.964 10:30:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:22.964 10:30:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:22.964 10:30:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:22.964 10:30:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:22.964 10:30:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1731666647 00:07:22.964 10:30:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:22.964 10:30:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1731666648 00:07:22.964 10:30:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:23.898 10:30:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:23.899 [2024-11-15 10:30:49.317142] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:07:23.899 [2024-11-15 10:30:49.317255] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60324 ] 00:07:24.156 [2024-11-15 10:30:49.478705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.156 [2024-11-15 10:30:49.548826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.156 [2024-11-15 10:30:49.607240] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.156  [2024-11-15T10:30:49.913Z] Copying: 512/512 [B] (average 500 kBps) 00:07:24.416 00:07:24.416 10:30:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:24.416 10:30:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1731666647 )) 00:07:24.416 10:30:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:24.416 10:30:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1731666648 )) 00:07:24.416 10:30:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:24.416 [2024-11-15 10:30:49.909492] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
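dd_flag_noatime records both files' access times with stat --printf=%X (1731666647 and 1731666648 above), sleeps one second, and copies dump0 with --iflag=noatime; the two (( ... )) checks confirm neither atime moved. The plain copy launched above then repeats the read without the flag, and the final check in the output that follows, (( atime_if < 1731666650 )), verifies the ordinary read did advance the source atime. A sketch of the same probe with GNU dd; O_NOATIME needs file ownership or CAP_FOWNER, and a relatime mount may suppress the last update, so treat this as best-effort:

    before=$(stat --printf=%X dd.dump0)
    sleep 1
    dd if=dd.dump0 iflag=noatime of=dd.dump1        # read without updating the access time
    (( $(stat --printf=%X dd.dump0) == before )) && echo 'atime preserved'
    dd if=dd.dump0 of=dd.dump1                      # ordinary read
    (( $(stat --printf=%X dd.dump0) > before )) && echo 'atime advanced'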
00:07:24.416 [2024-11-15 10:30:49.909630] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60339 ] 00:07:24.716 [2024-11-15 10:30:50.059644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.716 [2024-11-15 10:30:50.131860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.716 [2024-11-15 10:30:50.190006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.974  [2024-11-15T10:30:50.472Z] Copying: 512/512 [B] (average 500 kBps) 00:07:24.974 00:07:24.974 10:30:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:24.974 10:30:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1731666650 )) 00:07:24.974 00:07:24.974 real 0m2.193s 00:07:24.974 user 0m0.647s 00:07:24.974 sys 0m0.603s 00:07:24.974 10:30:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:24.974 10:30:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:24.974 ************************************ 00:07:24.974 END TEST dd_flag_noatime 00:07:24.974 ************************************ 00:07:25.233 10:30:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:25.233 10:30:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:25.233 10:30:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:25.233 10:30:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:25.233 ************************************ 00:07:25.233 START TEST dd_flags_misc 00:07:25.233 ************************************ 00:07:25.233 10:30:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1127 -- # io 00:07:25.233 10:30:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:25.233 10:30:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:25.233 10:30:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:25.233 10:30:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:25.233 10:30:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:25.233 10:30:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:25.233 10:30:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:25.233 10:30:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:25.233 10:30:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:25.233 [2024-11-15 10:30:50.542898] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
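dd_flags_misc drives everything that follows from the two arrays traced above: flags_ro=(direct nonblock) for reads, with flags_rw adding sync and dsync for writes, and each (read flag, write flag) pair round-trips the same 512-byte payload. The loop structure, reconstructed from the xtrace (binary path shortened for readability):

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
        spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
        [[ $(< dd.dump1) == $(< dd.dump0) ]]   # payload must survive every flag combination
      done
    done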
00:07:25.233 [2024-11-15 10:30:50.543207] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60368 ]
00:07:25.233 [2024-11-15 10:30:50.690916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:25.492 [2024-11-15 10:30:50.755805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:25.492 [2024-11-15 10:30:50.811774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:07:25.492  [2024-11-15T10:30:51.248Z] Copying: 512/512 [B] (average 500 kBps)
00:07:25.750 10:30:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ rwh8x6grzcvrt6qa63d5a0my1ci47eygthcjsugjbbq8ta4z8uku43p6tg5g34cx9fs589x060iuvi6... == \r\w\h\8\x\6\g\r... ]] [512-byte random payload and its shell-escaped duplicate elided; the direct/direct copy matched]
00:07:25.750 10:30:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:07:25.750 10:30:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:07:25.750 [2024-11-15 10:30:51.108457] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
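The pair just verified was direct-to-direct: --iflag=direct / --oflag=direct map to O_DIRECT, which bypasses the page cache and requires transfers aligned to the device's logical block size, a constraint the 512-byte payload satisfies on typical 512-byte-sector test devices. The equivalent with GNU dd, which exposes the same flag names:

    dd if=dd.dump0 of=dd.dump1 bs=512 iflag=direct oflag=direct   # unbuffered, alignment-sensitive I/O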
00:07:25.750 [2024-11-15 10:30:51.108949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60378 ] 00:07:26.008 [2024-11-15 10:30:51.267226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.008 [2024-11-15 10:30:51.333942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.008 [2024-11-15 10:30:51.389693] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.008  [2024-11-15T10:30:51.765Z] Copying: 512/512 [B] (average 500 kBps) 00:07:26.267 00:07:26.267 10:30:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ rwh8x6grzcvrt6qa63d5a0my1ci47eygthcjsugjbbq8ta4z8uku43p6tg5g34cx9fs589x060iuvi6wz5yu0pokxvw8pq9044s801ps9pdf7czlbndmeul62baiqywh67u0473aqy3lkrf21e2jlnbllk3pn4kz4ab0vdbmcjzksa3yuuvgvav3e5qnhqje471zbh2d0nx64bgtk1urgoa0spwwp6z0n7d133yg4wkr1zda3wgzcbbo4bfovttxqwxvxtkvpkg5u085mmup7kkpa7mtntrwscetv3pu33u98baov04lc7hgbpn3o3zkaa7h9m64oc08w5hpjt88z76e52zqyklcdpfm7ebtjih8vv6d81rhlq4bbo9l1pvgtyconwf7m2h5p7cpncgln6h8ok8bo1iowndqnri9qtlxdiysdv8iikifjbpyyw10hd1357rxi7bah3xozm171f49ff9zc52kt9xcbf1m74m170mkd2n4yt83vnjt3x9s == \r\w\h\8\x\6\g\r\z\c\v\r\t\6\q\a\6\3\d\5\a\0\m\y\1\c\i\4\7\e\y\g\t\h\c\j\s\u\g\j\b\b\q\8\t\a\4\z\8\u\k\u\4\3\p\6\t\g\5\g\3\4\c\x\9\f\s\5\8\9\x\0\6\0\i\u\v\i\6\w\z\5\y\u\0\p\o\k\x\v\w\8\p\q\9\0\4\4\s\8\0\1\p\s\9\p\d\f\7\c\z\l\b\n\d\m\e\u\l\6\2\b\a\i\q\y\w\h\6\7\u\0\4\7\3\a\q\y\3\l\k\r\f\2\1\e\2\j\l\n\b\l\l\k\3\p\n\4\k\z\4\a\b\0\v\d\b\m\c\j\z\k\s\a\3\y\u\u\v\g\v\a\v\3\e\5\q\n\h\q\j\e\4\7\1\z\b\h\2\d\0\n\x\6\4\b\g\t\k\1\u\r\g\o\a\0\s\p\w\w\p\6\z\0\n\7\d\1\3\3\y\g\4\w\k\r\1\z\d\a\3\w\g\z\c\b\b\o\4\b\f\o\v\t\t\x\q\w\x\v\x\t\k\v\p\k\g\5\u\0\8\5\m\m\u\p\7\k\k\p\a\7\m\t\n\t\r\w\s\c\e\t\v\3\p\u\3\3\u\9\8\b\a\o\v\0\4\l\c\7\h\g\b\p\n\3\o\3\z\k\a\a\7\h\9\m\6\4\o\c\0\8\w\5\h\p\j\t\8\8\z\7\6\e\5\2\z\q\y\k\l\c\d\p\f\m\7\e\b\t\j\i\h\8\v\v\6\d\8\1\r\h\l\q\4\b\b\o\9\l\1\p\v\g\t\y\c\o\n\w\f\7\m\2\h\5\p\7\c\p\n\c\g\l\n\6\h\8\o\k\8\b\o\1\i\o\w\n\d\q\n\r\i\9\q\t\l\x\d\i\y\s\d\v\8\i\i\k\i\f\j\b\p\y\y\w\1\0\h\d\1\3\5\7\r\x\i\7\b\a\h\3\x\o\z\m\1\7\1\f\4\9\f\f\9\z\c\5\2\k\t\9\x\c\b\f\1\m\7\4\m\1\7\0\m\k\d\2\n\4\y\t\8\3\v\n\j\t\3\x\9\s ]] 00:07:26.267 10:30:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:26.267 10:30:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:26.267 [2024-11-15 10:30:51.680096] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
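The right-hand side of these comparisons is rendered as \r\w\h\8... with every character backslash-escaped. That is bash xtrace output, not corruption: inside [[ lhs == rhs ]] an unquoted rhs is a glob pattern, so when the pattern came from a quoted expansion the trace escapes each character to show it can only match literally. A small illustration:

    s='ab*c'
    [[ $s == ab*c ]]  && echo glob      # unquoted RHS: '*' acts as a wildcard
    [[ $s == "$s" ]]  && echo literal   # quoted RHS: exact match only
    set -x
    [[ $s == "$s" ]]   # xtrace prints: [[ ab*c == \a\b\*\c ]]
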
00:07:26.267 [2024-11-15 10:30:51.680237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60387 ] 00:07:26.525 [2024-11-15 10:30:51.840858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.525 [2024-11-15 10:30:51.905885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.525 [2024-11-15 10:30:51.962145] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.525  [2024-11-15T10:30:52.281Z] Copying: 512/512 [B] (average 166 kBps) 00:07:26.783 00:07:26.783 10:30:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ rwh8x6grzcvrt6qa63d5a0my1ci47eygthcjsugjbbq8ta4z8uku43p6tg5g34cx9fs589x060iuvi6wz5yu0pokxvw8pq9044s801ps9pdf7czlbndmeul62baiqywh67u0473aqy3lkrf21e2jlnbllk3pn4kz4ab0vdbmcjzksa3yuuvgvav3e5qnhqje471zbh2d0nx64bgtk1urgoa0spwwp6z0n7d133yg4wkr1zda3wgzcbbo4bfovttxqwxvxtkvpkg5u085mmup7kkpa7mtntrwscetv3pu33u98baov04lc7hgbpn3o3zkaa7h9m64oc08w5hpjt88z76e52zqyklcdpfm7ebtjih8vv6d81rhlq4bbo9l1pvgtyconwf7m2h5p7cpncgln6h8ok8bo1iowndqnri9qtlxdiysdv8iikifjbpyyw10hd1357rxi7bah3xozm171f49ff9zc52kt9xcbf1m74m170mkd2n4yt83vnjt3x9s == \r\w\h\8\x\6\g\r\z\c\v\r\t\6\q\a\6\3\d\5\a\0\m\y\1\c\i\4\7\e\y\g\t\h\c\j\s\u\g\j\b\b\q\8\t\a\4\z\8\u\k\u\4\3\p\6\t\g\5\g\3\4\c\x\9\f\s\5\8\9\x\0\6\0\i\u\v\i\6\w\z\5\y\u\0\p\o\k\x\v\w\8\p\q\9\0\4\4\s\8\0\1\p\s\9\p\d\f\7\c\z\l\b\n\d\m\e\u\l\6\2\b\a\i\q\y\w\h\6\7\u\0\4\7\3\a\q\y\3\l\k\r\f\2\1\e\2\j\l\n\b\l\l\k\3\p\n\4\k\z\4\a\b\0\v\d\b\m\c\j\z\k\s\a\3\y\u\u\v\g\v\a\v\3\e\5\q\n\h\q\j\e\4\7\1\z\b\h\2\d\0\n\x\6\4\b\g\t\k\1\u\r\g\o\a\0\s\p\w\w\p\6\z\0\n\7\d\1\3\3\y\g\4\w\k\r\1\z\d\a\3\w\g\z\c\b\b\o\4\b\f\o\v\t\t\x\q\w\x\v\x\t\k\v\p\k\g\5\u\0\8\5\m\m\u\p\7\k\k\p\a\7\m\t\n\t\r\w\s\c\e\t\v\3\p\u\3\3\u\9\8\b\a\o\v\0\4\l\c\7\h\g\b\p\n\3\o\3\z\k\a\a\7\h\9\m\6\4\o\c\0\8\w\5\h\p\j\t\8\8\z\7\6\e\5\2\z\q\y\k\l\c\d\p\f\m\7\e\b\t\j\i\h\8\v\v\6\d\8\1\r\h\l\q\4\b\b\o\9\l\1\p\v\g\t\y\c\o\n\w\f\7\m\2\h\5\p\7\c\p\n\c\g\l\n\6\h\8\o\k\8\b\o\1\i\o\w\n\d\q\n\r\i\9\q\t\l\x\d\i\y\s\d\v\8\i\i\k\i\f\j\b\p\y\y\w\1\0\h\d\1\3\5\7\r\x\i\7\b\a\h\3\x\o\z\m\1\7\1\f\4\9\f\f\9\z\c\5\2\k\t\9\x\c\b\f\1\m\7\4\m\1\7\0\m\k\d\2\n\4\y\t\8\3\v\n\j\t\3\x\9\s ]] 00:07:26.783 10:30:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:26.783 10:30:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:26.783 [2024-11-15 10:30:52.255741] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
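The sync pass above averages 166 kBps against 500 kBps for the direct and nonblock passes, and the dsync pass launched here reports 250 kBps in the next chunk. The flag names follow the GNU dd convention; assuming spdk_dd gives them the same open(2) semantics, sync and dsync pay for a flush to stable storage on every write. The GNU dd equivalents for comparison (GNU syntax, not spdk_dd's --oflag form):

    # Assumption: spdk_dd's sync/dsync carry the same semantics as GNU dd's flags.
    dd if=dd.dump0 of=dd.dump1 oflag=sync    # O_SYNC: data and metadata flushed per write
    dd if=dd.dump0 of=dd.dump1 oflag=dsync   # O_DSYNC: data flushed, metadata deferred
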
00:07:26.783 [2024-11-15 10:30:52.255879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60397 ] 00:07:27.042 [2024-11-15 10:30:52.414910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.042 [2024-11-15 10:30:52.479380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.042 [2024-11-15 10:30:52.535835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.302  [2024-11-15T10:30:52.800Z] Copying: 512/512 [B] (average 250 kBps) 00:07:27.302 00:07:27.303 10:30:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ rwh8x6grzcvrt6qa63d5a0my1ci47eygthcjsugjbbq8ta4z8uku43p6tg5g34cx9fs589x060iuvi6wz5yu0pokxvw8pq9044s801ps9pdf7czlbndmeul62baiqywh67u0473aqy3lkrf21e2jlnbllk3pn4kz4ab0vdbmcjzksa3yuuvgvav3e5qnhqje471zbh2d0nx64bgtk1urgoa0spwwp6z0n7d133yg4wkr1zda3wgzcbbo4bfovttxqwxvxtkvpkg5u085mmup7kkpa7mtntrwscetv3pu33u98baov04lc7hgbpn3o3zkaa7h9m64oc08w5hpjt88z76e52zqyklcdpfm7ebtjih8vv6d81rhlq4bbo9l1pvgtyconwf7m2h5p7cpncgln6h8ok8bo1iowndqnri9qtlxdiysdv8iikifjbpyyw10hd1357rxi7bah3xozm171f49ff9zc52kt9xcbf1m74m170mkd2n4yt83vnjt3x9s == \r\w\h\8\x\6\g\r\z\c\v\r\t\6\q\a\6\3\d\5\a\0\m\y\1\c\i\4\7\e\y\g\t\h\c\j\s\u\g\j\b\b\q\8\t\a\4\z\8\u\k\u\4\3\p\6\t\g\5\g\3\4\c\x\9\f\s\5\8\9\x\0\6\0\i\u\v\i\6\w\z\5\y\u\0\p\o\k\x\v\w\8\p\q\9\0\4\4\s\8\0\1\p\s\9\p\d\f\7\c\z\l\b\n\d\m\e\u\l\6\2\b\a\i\q\y\w\h\6\7\u\0\4\7\3\a\q\y\3\l\k\r\f\2\1\e\2\j\l\n\b\l\l\k\3\p\n\4\k\z\4\a\b\0\v\d\b\m\c\j\z\k\s\a\3\y\u\u\v\g\v\a\v\3\e\5\q\n\h\q\j\e\4\7\1\z\b\h\2\d\0\n\x\6\4\b\g\t\k\1\u\r\g\o\a\0\s\p\w\w\p\6\z\0\n\7\d\1\3\3\y\g\4\w\k\r\1\z\d\a\3\w\g\z\c\b\b\o\4\b\f\o\v\t\t\x\q\w\x\v\x\t\k\v\p\k\g\5\u\0\8\5\m\m\u\p\7\k\k\p\a\7\m\t\n\t\r\w\s\c\e\t\v\3\p\u\3\3\u\9\8\b\a\o\v\0\4\l\c\7\h\g\b\p\n\3\o\3\z\k\a\a\7\h\9\m\6\4\o\c\0\8\w\5\h\p\j\t\8\8\z\7\6\e\5\2\z\q\y\k\l\c\d\p\f\m\7\e\b\t\j\i\h\8\v\v\6\d\8\1\r\h\l\q\4\b\b\o\9\l\1\p\v\g\t\y\c\o\n\w\f\7\m\2\h\5\p\7\c\p\n\c\g\l\n\6\h\8\o\k\8\b\o\1\i\o\w\n\d\q\n\r\i\9\q\t\l\x\d\i\y\s\d\v\8\i\i\k\i\f\j\b\p\y\y\w\1\0\h\d\1\3\5\7\r\x\i\7\b\a\h\3\x\o\z\m\1\7\1\f\4\9\f\f\9\z\c\5\2\k\t\9\x\c\b\f\1\m\7\4\m\1\7\0\m\k\d\2\n\4\y\t\8\3\v\n\j\t\3\x\9\s ]] 00:07:27.303 10:30:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:27.303 10:30:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:27.303 10:30:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:27.303 10:30:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:27.303 10:30:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:27.303 10:30:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:27.562 [2024-11-15 10:30:52.846805] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
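Note the payload changes from rwh8x6... to m43pz... at this point: the outer loop has moved to the nonblock read pass, and gen_bytes 512 (dd/posix.sh line 86) regenerates dd.dump0 before the four write-flag runs repeat. The gen_bytes implementation lives in dd/common.sh and is not shown in this log; a hypothetical stand-in consistent with the lowercase-alphanumeric payloads seen here:

    # Hypothetical gen_bytes stand-in; the real dd/common.sh helper may differ.
    gen_bytes() {
      local n=$1
      tr -dc 'a-z0-9' < /dev/urandom | head -c "$n" > dd.dump0
    }
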
00:07:27.562 [2024-11-15 10:30:52.846919] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60412 ] 00:07:27.562 [2024-11-15 10:30:52.993945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.820 [2024-11-15 10:30:53.058935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.820 [2024-11-15 10:30:53.115530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.820  [2024-11-15T10:30:53.577Z] Copying: 512/512 [B] (average 500 kBps) 00:07:28.079 00:07:28.079 10:30:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ m43pzjtvaxyqnpz550mkphmpzs8skk7514lcps122fx41m1ixgyl7gjk5hd0svbf0gaj24hu1ekrwjjbgzckoqwxbqu0pvecjo6i7n0q5v8ehwgvspjr6odm5xpy6w008nbs20b6l74h2ju6xzt3vpkadvr4gb52lz7s97b95kxo9uerheii2j0fk1mjc4xvch23ef7wuyd30glzovftc70xjwpx0iog5oh27ko9e60ykoe4rlzqghkrqapa88239vpoc3xvvgrqwrrq6nhgmfgt1jb8bjfhx0htoufznhp6sh8tsif1wigizb4xanagn4phrgwa1q24s4lhw3wimuvy01i6hdy97xgqd96zpmr5osgjdwt7shkl0kf8be8ror5pyxtjr8daai3m3mboojnv36xtmijxqgzcnp8hhx292nuhd9wydfl8w6etc4nccqu4xzxg1npx9sgoz0zbh5gc3p0imo99j3p3uw1x0884gojurgooe9dniikjljqq == \m\4\3\p\z\j\t\v\a\x\y\q\n\p\z\5\5\0\m\k\p\h\m\p\z\s\8\s\k\k\7\5\1\4\l\c\p\s\1\2\2\f\x\4\1\m\1\i\x\g\y\l\7\g\j\k\5\h\d\0\s\v\b\f\0\g\a\j\2\4\h\u\1\e\k\r\w\j\j\b\g\z\c\k\o\q\w\x\b\q\u\0\p\v\e\c\j\o\6\i\7\n\0\q\5\v\8\e\h\w\g\v\s\p\j\r\6\o\d\m\5\x\p\y\6\w\0\0\8\n\b\s\2\0\b\6\l\7\4\h\2\j\u\6\x\z\t\3\v\p\k\a\d\v\r\4\g\b\5\2\l\z\7\s\9\7\b\9\5\k\x\o\9\u\e\r\h\e\i\i\2\j\0\f\k\1\m\j\c\4\x\v\c\h\2\3\e\f\7\w\u\y\d\3\0\g\l\z\o\v\f\t\c\7\0\x\j\w\p\x\0\i\o\g\5\o\h\2\7\k\o\9\e\6\0\y\k\o\e\4\r\l\z\q\g\h\k\r\q\a\p\a\8\8\2\3\9\v\p\o\c\3\x\v\v\g\r\q\w\r\r\q\6\n\h\g\m\f\g\t\1\j\b\8\b\j\f\h\x\0\h\t\o\u\f\z\n\h\p\6\s\h\8\t\s\i\f\1\w\i\g\i\z\b\4\x\a\n\a\g\n\4\p\h\r\g\w\a\1\q\2\4\s\4\l\h\w\3\w\i\m\u\v\y\0\1\i\6\h\d\y\9\7\x\g\q\d\9\6\z\p\m\r\5\o\s\g\j\d\w\t\7\s\h\k\l\0\k\f\8\b\e\8\r\o\r\5\p\y\x\t\j\r\8\d\a\a\i\3\m\3\m\b\o\o\j\n\v\3\6\x\t\m\i\j\x\q\g\z\c\n\p\8\h\h\x\2\9\2\n\u\h\d\9\w\y\d\f\l\8\w\6\e\t\c\4\n\c\c\q\u\4\x\z\x\g\1\n\p\x\9\s\g\o\z\0\z\b\h\5\g\c\3\p\0\i\m\o\9\9\j\3\p\3\u\w\1\x\0\8\8\4\g\o\j\u\r\g\o\o\e\9\d\n\i\i\k\j\l\j\q\q ]] 00:07:28.079 10:30:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:28.079 10:30:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:28.079 [2024-11-15 10:30:53.413299] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:28.079 [2024-11-15 10:30:53.413418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60421 ] 00:07:28.079 [2024-11-15 10:30:53.558179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.337 [2024-11-15 10:30:53.622530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.337 [2024-11-15 10:30:53.677689] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.337  [2024-11-15T10:30:54.093Z] Copying: 512/512 [B] (average 500 kBps) 00:07:28.595 00:07:28.596 10:30:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ m43pzjtvaxyqnpz550mkphmpzs8skk7514lcps122fx41m1ixgyl7gjk5hd0svbf0gaj24hu1ekrwjjbgzckoqwxbqu0pvecjo6i7n0q5v8ehwgvspjr6odm5xpy6w008nbs20b6l74h2ju6xzt3vpkadvr4gb52lz7s97b95kxo9uerheii2j0fk1mjc4xvch23ef7wuyd30glzovftc70xjwpx0iog5oh27ko9e60ykoe4rlzqghkrqapa88239vpoc3xvvgrqwrrq6nhgmfgt1jb8bjfhx0htoufznhp6sh8tsif1wigizb4xanagn4phrgwa1q24s4lhw3wimuvy01i6hdy97xgqd96zpmr5osgjdwt7shkl0kf8be8ror5pyxtjr8daai3m3mboojnv36xtmijxqgzcnp8hhx292nuhd9wydfl8w6etc4nccqu4xzxg1npx9sgoz0zbh5gc3p0imo99j3p3uw1x0884gojurgooe9dniikjljqq == \m\4\3\p\z\j\t\v\a\x\y\q\n\p\z\5\5\0\m\k\p\h\m\p\z\s\8\s\k\k\7\5\1\4\l\c\p\s\1\2\2\f\x\4\1\m\1\i\x\g\y\l\7\g\j\k\5\h\d\0\s\v\b\f\0\g\a\j\2\4\h\u\1\e\k\r\w\j\j\b\g\z\c\k\o\q\w\x\b\q\u\0\p\v\e\c\j\o\6\i\7\n\0\q\5\v\8\e\h\w\g\v\s\p\j\r\6\o\d\m\5\x\p\y\6\w\0\0\8\n\b\s\2\0\b\6\l\7\4\h\2\j\u\6\x\z\t\3\v\p\k\a\d\v\r\4\g\b\5\2\l\z\7\s\9\7\b\9\5\k\x\o\9\u\e\r\h\e\i\i\2\j\0\f\k\1\m\j\c\4\x\v\c\h\2\3\e\f\7\w\u\y\d\3\0\g\l\z\o\v\f\t\c\7\0\x\j\w\p\x\0\i\o\g\5\o\h\2\7\k\o\9\e\6\0\y\k\o\e\4\r\l\z\q\g\h\k\r\q\a\p\a\8\8\2\3\9\v\p\o\c\3\x\v\v\g\r\q\w\r\r\q\6\n\h\g\m\f\g\t\1\j\b\8\b\j\f\h\x\0\h\t\o\u\f\z\n\h\p\6\s\h\8\t\s\i\f\1\w\i\g\i\z\b\4\x\a\n\a\g\n\4\p\h\r\g\w\a\1\q\2\4\s\4\l\h\w\3\w\i\m\u\v\y\0\1\i\6\h\d\y\9\7\x\g\q\d\9\6\z\p\m\r\5\o\s\g\j\d\w\t\7\s\h\k\l\0\k\f\8\b\e\8\r\o\r\5\p\y\x\t\j\r\8\d\a\a\i\3\m\3\m\b\o\o\j\n\v\3\6\x\t\m\i\j\x\q\g\z\c\n\p\8\h\h\x\2\9\2\n\u\h\d\9\w\y\d\f\l\8\w\6\e\t\c\4\n\c\c\q\u\4\x\z\x\g\1\n\p\x\9\s\g\o\z\0\z\b\h\5\g\c\3\p\0\i\m\o\9\9\j\3\p\3\u\w\1\x\0\8\8\4\g\o\j\u\r\g\o\o\e\9\d\n\i\i\k\j\l\j\q\q ]] 00:07:28.596 10:30:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:28.596 10:30:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:28.596 [2024-11-15 10:30:53.959705] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:28.596 [2024-11-15 10:30:53.959816] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60431 ] 00:07:28.854 [2024-11-15 10:30:54.107734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.854 [2024-11-15 10:30:54.173111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.854 [2024-11-15 10:30:54.229893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.854  [2024-11-15T10:30:54.611Z] Copying: 512/512 [B] (average 250 kBps) 00:07:29.113 00:07:29.113 10:30:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ m43pzjtvaxyqnpz550mkphmpzs8skk7514lcps122fx41m1ixgyl7gjk5hd0svbf0gaj24hu1ekrwjjbgzckoqwxbqu0pvecjo6i7n0q5v8ehwgvspjr6odm5xpy6w008nbs20b6l74h2ju6xzt3vpkadvr4gb52lz7s97b95kxo9uerheii2j0fk1mjc4xvch23ef7wuyd30glzovftc70xjwpx0iog5oh27ko9e60ykoe4rlzqghkrqapa88239vpoc3xvvgrqwrrq6nhgmfgt1jb8bjfhx0htoufznhp6sh8tsif1wigizb4xanagn4phrgwa1q24s4lhw3wimuvy01i6hdy97xgqd96zpmr5osgjdwt7shkl0kf8be8ror5pyxtjr8daai3m3mboojnv36xtmijxqgzcnp8hhx292nuhd9wydfl8w6etc4nccqu4xzxg1npx9sgoz0zbh5gc3p0imo99j3p3uw1x0884gojurgooe9dniikjljqq == \m\4\3\p\z\j\t\v\a\x\y\q\n\p\z\5\5\0\m\k\p\h\m\p\z\s\8\s\k\k\7\5\1\4\l\c\p\s\1\2\2\f\x\4\1\m\1\i\x\g\y\l\7\g\j\k\5\h\d\0\s\v\b\f\0\g\a\j\2\4\h\u\1\e\k\r\w\j\j\b\g\z\c\k\o\q\w\x\b\q\u\0\p\v\e\c\j\o\6\i\7\n\0\q\5\v\8\e\h\w\g\v\s\p\j\r\6\o\d\m\5\x\p\y\6\w\0\0\8\n\b\s\2\0\b\6\l\7\4\h\2\j\u\6\x\z\t\3\v\p\k\a\d\v\r\4\g\b\5\2\l\z\7\s\9\7\b\9\5\k\x\o\9\u\e\r\h\e\i\i\2\j\0\f\k\1\m\j\c\4\x\v\c\h\2\3\e\f\7\w\u\y\d\3\0\g\l\z\o\v\f\t\c\7\0\x\j\w\p\x\0\i\o\g\5\o\h\2\7\k\o\9\e\6\0\y\k\o\e\4\r\l\z\q\g\h\k\r\q\a\p\a\8\8\2\3\9\v\p\o\c\3\x\v\v\g\r\q\w\r\r\q\6\n\h\g\m\f\g\t\1\j\b\8\b\j\f\h\x\0\h\t\o\u\f\z\n\h\p\6\s\h\8\t\s\i\f\1\w\i\g\i\z\b\4\x\a\n\a\g\n\4\p\h\r\g\w\a\1\q\2\4\s\4\l\h\w\3\w\i\m\u\v\y\0\1\i\6\h\d\y\9\7\x\g\q\d\9\6\z\p\m\r\5\o\s\g\j\d\w\t\7\s\h\k\l\0\k\f\8\b\e\8\r\o\r\5\p\y\x\t\j\r\8\d\a\a\i\3\m\3\m\b\o\o\j\n\v\3\6\x\t\m\i\j\x\q\g\z\c\n\p\8\h\h\x\2\9\2\n\u\h\d\9\w\y\d\f\l\8\w\6\e\t\c\4\n\c\c\q\u\4\x\z\x\g\1\n\p\x\9\s\g\o\z\0\z\b\h\5\g\c\3\p\0\i\m\o\9\9\j\3\p\3\u\w\1\x\0\8\8\4\g\o\j\u\r\g\o\o\e\9\d\n\i\i\k\j\l\j\q\q ]] 00:07:29.113 10:30:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:29.113 10:30:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:29.113 [2024-11-15 10:30:54.517468] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:29.113 [2024-11-15 10:30:54.517611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60440 ] 00:07:29.465 [2024-11-15 10:30:54.668015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.465 [2024-11-15 10:30:54.732637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.465 [2024-11-15 10:30:54.789008] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.465  [2024-11-15T10:30:55.221Z] Copying: 512/512 [B] (average 166 kBps) 00:07:29.723 00:07:29.723 ************************************ 00:07:29.723 END TEST dd_flags_misc 00:07:29.723 ************************************ 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ m43pzjtvaxyqnpz550mkphmpzs8skk7514lcps122fx41m1ixgyl7gjk5hd0svbf0gaj24hu1ekrwjjbgzckoqwxbqu0pvecjo6i7n0q5v8ehwgvspjr6odm5xpy6w008nbs20b6l74h2ju6xzt3vpkadvr4gb52lz7s97b95kxo9uerheii2j0fk1mjc4xvch23ef7wuyd30glzovftc70xjwpx0iog5oh27ko9e60ykoe4rlzqghkrqapa88239vpoc3xvvgrqwrrq6nhgmfgt1jb8bjfhx0htoufznhp6sh8tsif1wigizb4xanagn4phrgwa1q24s4lhw3wimuvy01i6hdy97xgqd96zpmr5osgjdwt7shkl0kf8be8ror5pyxtjr8daai3m3mboojnv36xtmijxqgzcnp8hhx292nuhd9wydfl8w6etc4nccqu4xzxg1npx9sgoz0zbh5gc3p0imo99j3p3uw1x0884gojurgooe9dniikjljqq == \m\4\3\p\z\j\t\v\a\x\y\q\n\p\z\5\5\0\m\k\p\h\m\p\z\s\8\s\k\k\7\5\1\4\l\c\p\s\1\2\2\f\x\4\1\m\1\i\x\g\y\l\7\g\j\k\5\h\d\0\s\v\b\f\0\g\a\j\2\4\h\u\1\e\k\r\w\j\j\b\g\z\c\k\o\q\w\x\b\q\u\0\p\v\e\c\j\o\6\i\7\n\0\q\5\v\8\e\h\w\g\v\s\p\j\r\6\o\d\m\5\x\p\y\6\w\0\0\8\n\b\s\2\0\b\6\l\7\4\h\2\j\u\6\x\z\t\3\v\p\k\a\d\v\r\4\g\b\5\2\l\z\7\s\9\7\b\9\5\k\x\o\9\u\e\r\h\e\i\i\2\j\0\f\k\1\m\j\c\4\x\v\c\h\2\3\e\f\7\w\u\y\d\3\0\g\l\z\o\v\f\t\c\7\0\x\j\w\p\x\0\i\o\g\5\o\h\2\7\k\o\9\e\6\0\y\k\o\e\4\r\l\z\q\g\h\k\r\q\a\p\a\8\8\2\3\9\v\p\o\c\3\x\v\v\g\r\q\w\r\r\q\6\n\h\g\m\f\g\t\1\j\b\8\b\j\f\h\x\0\h\t\o\u\f\z\n\h\p\6\s\h\8\t\s\i\f\1\w\i\g\i\z\b\4\x\a\n\a\g\n\4\p\h\r\g\w\a\1\q\2\4\s\4\l\h\w\3\w\i\m\u\v\y\0\1\i\6\h\d\y\9\7\x\g\q\d\9\6\z\p\m\r\5\o\s\g\j\d\w\t\7\s\h\k\l\0\k\f\8\b\e\8\r\o\r\5\p\y\x\t\j\r\8\d\a\a\i\3\m\3\m\b\o\o\j\n\v\3\6\x\t\m\i\j\x\q\g\z\c\n\p\8\h\h\x\2\9\2\n\u\h\d\9\w\y\d\f\l\8\w\6\e\t\c\4\n\c\c\q\u\4\x\z\x\g\1\n\p\x\9\s\g\o\z\0\z\b\h\5\g\c\3\p\0\i\m\o\9\9\j\3\p\3\u\w\1\x\0\8\8\4\g\o\j\u\r\g\o\o\e\9\d\n\i\i\k\j\l\j\q\q ]] 00:07:29.723 00:07:29.723 real 0m4.546s 00:07:29.723 user 0m2.518s 00:07:29.723 sys 0m2.250s 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:29.723 * Second test run, disabling liburing, forcing AIO 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:29.723 ************************************ 00:07:29.723 START TEST dd_flag_append_forced_aio 00:07:29.723 ************************************ 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1127 -- # append 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=1vnrfpv9w9hjt4qn867grj1ynurk5b8x 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=m85mwupty1v3nrdjm3y7pzofat2k908w 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 1vnrfpv9w9hjt4qn867grj1ynurk5b8x 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s m85mwupty1v3nrdjm3y7pzofat2k908w 00:07:29.723 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:29.723 [2024-11-15 10:30:55.150620] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
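The append test seeds each dump file with a distinct 32-byte token (1vnrf... and m85mw... in this run), copies dump0 onto dump1 with --oflag=append, and then asserts dump1 holds its original token followed by dump0's, which is the [[ ... ]] check at dd/posix.sh line 27 in the next chunk. A condensed sketch:

    # Sketch of the append assertion; dump0/dump1 are the two tokens printf'd above.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1
    "$SPDK_DD" --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
    [[ $(<dd.dump1) == "${dump1}${dump0}" ]]   # original content plus appended payload
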
00:07:29.723 [2024-11-15 10:30:55.150725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60469 ] 00:07:29.981 [2024-11-15 10:30:55.298418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.981 [2024-11-15 10:30:55.363892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.981 [2024-11-15 10:30:55.419822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.981  [2024-11-15T10:30:55.740Z] Copying: 32/32 [B] (average 31 kBps) 00:07:30.242 00:07:30.242 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ m85mwupty1v3nrdjm3y7pzofat2k908w1vnrfpv9w9hjt4qn867grj1ynurk5b8x == \m\8\5\m\w\u\p\t\y\1\v\3\n\r\d\j\m\3\y\7\p\z\o\f\a\t\2\k\9\0\8\w\1\v\n\r\f\p\v\9\w\9\h\j\t\4\q\n\8\6\7\g\r\j\1\y\n\u\r\k\5\b\8\x ]] 00:07:30.242 00:07:30.242 real 0m0.595s 00:07:30.242 user 0m0.322s 00:07:30.242 sys 0m0.150s 00:07:30.242 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:30.242 ************************************ 00:07:30.242 END TEST dd_flag_append_forced_aio 00:07:30.242 ************************************ 00:07:30.242 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:30.242 10:30:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:30.242 10:30:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:30.242 10:30:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:30.242 10:30:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:30.242 ************************************ 00:07:30.242 START TEST dd_flag_directory_forced_aio 00:07:30.242 ************************************ 00:07:30.242 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1127 -- # directory 00:07:30.242 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:30.242 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:30.242 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:30.242 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.242 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.242 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.242 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.242 10:30:55 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.242 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.242 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.242 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:30.242 10:30:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:30.500 [2024-11-15 10:30:55.790083] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:07:30.500 [2024-11-15 10:30:55.790200] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60496 ] 00:07:30.500 [2024-11-15 10:30:55.939979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.758 [2024-11-15 10:30:56.003691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.758 [2024-11-15 10:30:56.059967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.758 [2024-11-15 10:30:56.100423] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:30.758 [2024-11-15 10:30:56.100495] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:30.758 [2024-11-15 10:30:56.100528] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.758 [2024-11-15 10:30:56.226644] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:31.017 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:07:31.017 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:31.017 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:07:31.017 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:31.017 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:31.017 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:31.017 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:31.017 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:31.017 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:31.017 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.017 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.017 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.017 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.017 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.017 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.017 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.017 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:31.017 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:31.017 [2024-11-15 10:30:56.362132] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:07:31.017 [2024-11-15 10:30:56.362246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60505 ] 00:07:31.017 [2024-11-15 10:30:56.512338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.275 [2024-11-15 10:30:56.576684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.275 [2024-11-15 10:30:56.633310] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.275 [2024-11-15 10:30:56.673606] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:31.275 [2024-11-15 10:30:56.673676] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:31.275 [2024-11-15 10:30:56.673696] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:31.533 [2024-11-15 10:30:56.797707] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:31.533 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:07:31.533 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:31.533 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:07:31.533 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:31.533 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:31.533 10:30:56 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:31.533 00:07:31.533 real 0m1.136s 00:07:31.533 user 0m0.629s 00:07:31.533 sys 0m0.296s 00:07:31.533 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:31.533 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:31.533 ************************************ 00:07:31.533 END TEST dd_flag_directory_forced_aio 00:07:31.533 ************************************ 00:07:31.533 10:30:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:31.533 10:30:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:31.533 10:30:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:31.533 10:30:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:31.533 ************************************ 00:07:31.533 START TEST dd_flag_nofollow_forced_aio 00:07:31.533 ************************************ 00:07:31.533 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1127 -- # nofollow 00:07:31.534 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:31.534 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:31.534 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:31.534 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:31.534 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:31.534 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:31.534 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:31.534 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.534 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.534 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.534 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.534 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.534 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.534 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.534 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:31.534 10:30:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:31.534 [2024-11-15 10:30:56.987457] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:07:31.534 [2024-11-15 10:30:56.987587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60539 ] 00:07:31.791 [2024-11-15 10:30:57.132611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.791 [2024-11-15 10:30:57.197977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.791 [2024-11-15 10:30:57.254846] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.049 [2024-11-15 10:30:57.295879] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:32.049 [2024-11-15 10:30:57.295937] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:32.049 [2024-11-15 10:30:57.295957] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:32.049 [2024-11-15 10:30:57.419370] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:32.049 10:30:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:07:32.049 10:30:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:32.049 10:30:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:07:32.049 10:30:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:32.049 10:30:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:32.049 10:30:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:32.049 10:30:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:32.049 10:30:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:32.049 10:30:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:32.049 10:30:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.049 10:30:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.049 10:30:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.049 10:30:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.049 10:30:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.049 10:30:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.049 10:30:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.049 10:30:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:32.050 10:30:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:32.307 [2024-11-15 10:30:57.548905] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:07:32.307 [2024-11-15 10:30:57.549042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60547 ] 00:07:32.307 [2024-11-15 10:30:57.696534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.307 [2024-11-15 10:30:57.760413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.565 [2024-11-15 10:30:57.818595] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.565 [2024-11-15 10:30:57.859981] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:32.565 [2024-11-15 10:30:57.860034] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:32.565 [2024-11-15 10:30:57.860056] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:32.565 [2024-11-15 10:30:57.986459] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:32.565 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:07:32.565 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:32.565 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:07:32.565 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:32.565 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:32.565 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:32.565 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:07:32.565 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:32.565 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:32.823 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:32.823 [2024-11-15 10:30:58.125359] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:07:32.823 [2024-11-15 10:30:58.125486] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60556 ] 00:07:32.823 [2024-11-15 10:30:58.272307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.081 [2024-11-15 10:30:58.337166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.081 [2024-11-15 10:30:58.391756] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.081  [2024-11-15T10:30:58.839Z] Copying: 512/512 [B] (average 500 kBps) 00:07:33.341 00:07:33.341 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ m6my7pbmrmfqqded65igzb7q8akiyi8zwirr21ubshtg9qv0gmfdku6mibbwmdlrigch8o9loai7ykjiu6cofbi4y5sqafqvi9u3xcmlgxa981ci6wun4k6c4hsof2fxtdw43h38zeg0cjiq9at5m32m9l160suyx0m2vvyfvdw90g30b1r8m05u49pc7p4d773acyo3t3l6k5a8ueptxyg0p4cxtaj4dh4uhrr95e4udk6o7u0c6ouwrt8j0z5qg500n0id62gxms5c7idnnf0ry9agh40x6hvtg4m8zz4a1vntefe16g6tyvpxqtbj1ha5ui6cjkmyz0dmginmzd33wthqx64sam7e8p9mtdtjq85as1anwgx8vxjwwe4gddlg1s6l5795190dfsn0qeh5f8khnala7zyi3951iaax39kg41ojrlofkrl4e906yg8pfkagviaxotxxlwyszkf6pnode000pnpwvuedg7cpoujxw1xkf2g2qu2dxqt3 == \m\6\m\y\7\p\b\m\r\m\f\q\q\d\e\d\6\5\i\g\z\b\7\q\8\a\k\i\y\i\8\z\w\i\r\r\2\1\u\b\s\h\t\g\9\q\v\0\g\m\f\d\k\u\6\m\i\b\b\w\m\d\l\r\i\g\c\h\8\o\9\l\o\a\i\7\y\k\j\i\u\6\c\o\f\b\i\4\y\5\s\q\a\f\q\v\i\9\u\3\x\c\m\l\g\x\a\9\8\1\c\i\6\w\u\n\4\k\6\c\4\h\s\o\f\2\f\x\t\d\w\4\3\h\3\8\z\e\g\0\c\j\i\q\9\a\t\5\m\3\2\m\9\l\1\6\0\s\u\y\x\0\m\2\v\v\y\f\v\d\w\9\0\g\3\0\b\1\r\8\m\0\5\u\4\9\p\c\7\p\4\d\7\7\3\a\c\y\o\3\t\3\l\6\k\5\a\8\u\e\p\t\x\y\g\0\p\4\c\x\t\a\j\4\d\h\4\u\h\r\r\9\5\e\4\u\d\k\6\o\7\u\0\c\6\o\u\w\r\t\8\j\0\z\5\q\g\5\0\0\n\0\i\d\6\2\g\x\m\s\5\c\7\i\d\n\n\f\0\r\y\9\a\g\h\4\0\x\6\h\v\t\g\4\m\8\z\z\4\a\1\v\n\t\e\f\e\1\6\g\6\t\y\v\p\x\q\t\b\j\1\h\a\5\u\i\6\c\j\k\m\y\z\0\d\m\g\i\n\m\z\d\3\3\w\t\h\q\x\6\4\s\a\m\7\e\8\p\9\m\t\d\t\j\q\8\5\a\s\1\a\n\w\g\x\8\v\x\j\w\w\e\4\g\d\d\l\g\1\s\6\l\5\7\9\5\1\9\0\d\f\s\n\0\q\e\h\5\f\8\k\h\n\a\l\a\7\z\y\i\3\9\5\1\i\a\a\x\3\9\k\g\4\1\o\j\r\l\o\f\k\r\l\4\e\9\0\6\y\g\8\p\f\k\a\g\v\i\a\x\o\t\x\x\l\w\y\s\z\k\f\6\p\n\o\d\e\0\0\0\p\n\p\w\v\u\e\d\g\7\c\p\o\u\j\x\w\1\x\k\f\2\g\2\q\u\2\d\x\q\t\3 ]] 00:07:33.341 00:07:33.341 real 0m1.715s 00:07:33.341 user 0m0.942s 00:07:33.341 sys 0m0.440s 00:07:33.341 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:33.341 ************************************ 00:07:33.341 END TEST dd_flag_nofollow_forced_aio 00:07:33.341 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:33.341 ************************************ 00:07:33.341 10:30:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:07:33.341 10:30:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:33.341 10:30:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:33.341 10:30:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:33.341 ************************************ 00:07:33.341 START TEST dd_flag_noatime_forced_aio 00:07:33.341 ************************************ 00:07:33.341 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1127 -- # noatime 00:07:33.341 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:33.341 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:33.341 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:33.341 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:33.341 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:33.341 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:33.341 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1731666658 00:07:33.341 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:33.341 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1731666658 00:07:33.341 10:30:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:34.275 10:30:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:34.534 [2024-11-15 10:30:59.776284] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
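The noatime assertions that complete in the next chunk work by capturing both files' access times with stat --printf=%X, sleeping one second so an atime update would be observable, and checking that a read through --iflag=noatime leaves the source atime untouched while a plain read advances it (assuming the filesystem updates atimes at all). A condensed reconstruction:

    # Condensed noatime check; comparison direction follows dd/posix.sh:69 and :73.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    atime_if=$(stat --printf=%X dd.dump0)
    sleep 1
    "$SPDK_DD" --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) == atime_if ))   # noatime read: unchanged
    "$SPDK_DD" --aio --if=dd.dump0 --of=dd.dump1
    (( atime_if < $(stat --printf=%X dd.dump0) ))    # plain read: atime advanced
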
00:07:34.534 [2024-11-15 10:30:59.776402] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60596 ] 00:07:34.534 [2024-11-15 10:30:59.926540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.534 [2024-11-15 10:30:59.996809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.793 [2024-11-15 10:31:00.056599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.793  [2024-11-15T10:31:00.560Z] Copying: 512/512 [B] (average 500 kBps) 00:07:35.062 00:07:35.062 10:31:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:35.062 10:31:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1731666658 )) 00:07:35.062 10:31:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.062 10:31:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1731666658 )) 00:07:35.062 10:31:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.062 [2024-11-15 10:31:00.391453] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:07:35.062 [2024-11-15 10:31:00.391578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60608 ] 00:07:35.062 [2024-11-15 10:31:00.539609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.320 [2024-11-15 10:31:00.603603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.320 [2024-11-15 10:31:00.659012] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.320  [2024-11-15T10:31:01.075Z] Copying: 512/512 [B] (average 500 kBps) 00:07:35.577 00:07:35.577 10:31:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:35.577 10:31:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1731666660 )) 00:07:35.577 00:07:35.577 real 0m2.229s 00:07:35.577 user 0m0.670s 00:07:35.577 sys 0m0.317s 00:07:35.577 10:31:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:35.577 10:31:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:35.577 ************************************ 00:07:35.577 END TEST dd_flag_noatime_forced_aio 00:07:35.577 ************************************ 00:07:35.577 10:31:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:35.577 10:31:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:35.577 10:31:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:35.577 10:31:00 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:35.577 ************************************ 00:07:35.577 START TEST dd_flags_misc_forced_aio 00:07:35.577 ************************************ 00:07:35.578 10:31:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1127 -- # io 00:07:35.578 10:31:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:35.578 10:31:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:35.578 10:31:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:35.578 10:31:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:35.578 10:31:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:35.578 10:31:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:35.578 10:31:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:35.578 10:31:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:35.578 10:31:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:35.578 [2024-11-15 10:31:01.052428] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:07:35.578 [2024-11-15 10:31:01.052620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60640 ] 00:07:35.836 [2024-11-15 10:31:01.206254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.836 [2024-11-15 10:31:01.270226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.836 [2024-11-15 10:31:01.327755] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.095  [2024-11-15T10:31:01.593Z] Copying: 512/512 [B] (average 500 kBps) 00:07:36.095 00:07:36.095 10:31:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 18oc4nbwug537rmmtyzgbt7hmyt998pie3u5o5o558fjedo25qmm06ij4um4mieec8ipywumgzpjmc51g3dzp08u61k1qltcdopdxyoqmo4voslsacci0el374ndxm7er1c24jgjc2zax7tstrsw8zq3niv8q9737yugczzqpge478zdrryelh24r4zj38nq3baej90kjzqhq623wb0z7r8bwb9s4m0xl02h78di1k489znvdbrfjg0u7kz3hg2lmni8asd11tbl8z57hnn730lcpvgj7mbmjuicwp3wg0gb7rubehxsiyygfnwq982rn5srlzgdld8ciu4zoj9htqy4jxsk4y8mrkeva8jnsgl2j7anq9cbczjocqb8ig4gkanuen1wq1e195ay0xyyi9l1ufv6x282ut2s6zu7zawxbxbpzry1z068wmfw8g60v0rk6o5v9ce22ke3yyria9zga6ry32m0osmis5lfvcfdu93g75z4pmqcxksah9tc == 
\1\8\o\c\4\n\b\w\u\g\5\3\7\r\m\m\t\y\z\g\b\t\7\h\m\y\t\9\9\8\p\i\e\3\u\5\o\5\o\5\5\8\f\j\e\d\o\2\5\q\m\m\0\6\i\j\4\u\m\4\m\i\e\e\c\8\i\p\y\w\u\m\g\z\p\j\m\c\5\1\g\3\d\z\p\0\8\u\6\1\k\1\q\l\t\c\d\o\p\d\x\y\o\q\m\o\4\v\o\s\l\s\a\c\c\i\0\e\l\3\7\4\n\d\x\m\7\e\r\1\c\2\4\j\g\j\c\2\z\a\x\7\t\s\t\r\s\w\8\z\q\3\n\i\v\8\q\9\7\3\7\y\u\g\c\z\z\q\p\g\e\4\7\8\z\d\r\r\y\e\l\h\2\4\r\4\z\j\3\8\n\q\3\b\a\e\j\9\0\k\j\z\q\h\q\6\2\3\w\b\0\z\7\r\8\b\w\b\9\s\4\m\0\x\l\0\2\h\7\8\d\i\1\k\4\8\9\z\n\v\d\b\r\f\j\g\0\u\7\k\z\3\h\g\2\l\m\n\i\8\a\s\d\1\1\t\b\l\8\z\5\7\h\n\n\7\3\0\l\c\p\v\g\j\7\m\b\m\j\u\i\c\w\p\3\w\g\0\g\b\7\r\u\b\e\h\x\s\i\y\y\g\f\n\w\q\9\8\2\r\n\5\s\r\l\z\g\d\l\d\8\c\i\u\4\z\o\j\9\h\t\q\y\4\j\x\s\k\4\y\8\m\r\k\e\v\a\8\j\n\s\g\l\2\j\7\a\n\q\9\c\b\c\z\j\o\c\q\b\8\i\g\4\g\k\a\n\u\e\n\1\w\q\1\e\1\9\5\a\y\0\x\y\y\i\9\l\1\u\f\v\6\x\2\8\2\u\t\2\s\6\z\u\7\z\a\w\x\b\x\b\p\z\r\y\1\z\0\6\8\w\m\f\w\8\g\6\0\v\0\r\k\6\o\5\v\9\c\e\2\2\k\e\3\y\y\r\i\a\9\z\g\a\6\r\y\3\2\m\0\o\s\m\i\s\5\l\f\v\c\f\d\u\9\3\g\7\5\z\4\p\m\q\c\x\k\s\a\h\9\t\c ]] 00:07:36.095 10:31:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:36.095 10:31:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:36.353 [2024-11-15 10:31:01.636584] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:07:36.353 [2024-11-15 10:31:01.636686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60642 ] 00:07:36.353 [2024-11-15 10:31:01.784374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.611 [2024-11-15 10:31:01.848716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.611 [2024-11-15 10:31:01.907080] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.611  [2024-11-15T10:31:02.367Z] Copying: 512/512 [B] (average 500 kBps) 00:07:36.869 00:07:36.870 10:31:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 18oc4nbwug537rmmtyzgbt7hmyt998pie3u5o5o558fjedo25qmm06ij4um4mieec8ipywumgzpjmc51g3dzp08u61k1qltcdopdxyoqmo4voslsacci0el374ndxm7er1c24jgjc2zax7tstrsw8zq3niv8q9737yugczzqpge478zdrryelh24r4zj38nq3baej90kjzqhq623wb0z7r8bwb9s4m0xl02h78di1k489znvdbrfjg0u7kz3hg2lmni8asd11tbl8z57hnn730lcpvgj7mbmjuicwp3wg0gb7rubehxsiyygfnwq982rn5srlzgdld8ciu4zoj9htqy4jxsk4y8mrkeva8jnsgl2j7anq9cbczjocqb8ig4gkanuen1wq1e195ay0xyyi9l1ufv6x282ut2s6zu7zawxbxbpzry1z068wmfw8g60v0rk6o5v9ce22ke3yyria9zga6ry32m0osmis5lfvcfdu93g75z4pmqcxksah9tc == 
\1\8\o\c\4\n\b\w\u\g\5\3\7\r\m\m\t\y\z\g\b\t\7\h\m\y\t\9\9\8\p\i\e\3\u\5\o\5\o\5\5\8\f\j\e\d\o\2\5\q\m\m\0\6\i\j\4\u\m\4\m\i\e\e\c\8\i\p\y\w\u\m\g\z\p\j\m\c\5\1\g\3\d\z\p\0\8\u\6\1\k\1\q\l\t\c\d\o\p\d\x\y\o\q\m\o\4\v\o\s\l\s\a\c\c\i\0\e\l\3\7\4\n\d\x\m\7\e\r\1\c\2\4\j\g\j\c\2\z\a\x\7\t\s\t\r\s\w\8\z\q\3\n\i\v\8\q\9\7\3\7\y\u\g\c\z\z\q\p\g\e\4\7\8\z\d\r\r\y\e\l\h\2\4\r\4\z\j\3\8\n\q\3\b\a\e\j\9\0\k\j\z\q\h\q\6\2\3\w\b\0\z\7\r\8\b\w\b\9\s\4\m\0\x\l\0\2\h\7\8\d\i\1\k\4\8\9\z\n\v\d\b\r\f\j\g\0\u\7\k\z\3\h\g\2\l\m\n\i\8\a\s\d\1\1\t\b\l\8\z\5\7\h\n\n\7\3\0\l\c\p\v\g\j\7\m\b\m\j\u\i\c\w\p\3\w\g\0\g\b\7\r\u\b\e\h\x\s\i\y\y\g\f\n\w\q\9\8\2\r\n\5\s\r\l\z\g\d\l\d\8\c\i\u\4\z\o\j\9\h\t\q\y\4\j\x\s\k\4\y\8\m\r\k\e\v\a\8\j\n\s\g\l\2\j\7\a\n\q\9\c\b\c\z\j\o\c\q\b\8\i\g\4\g\k\a\n\u\e\n\1\w\q\1\e\1\9\5\a\y\0\x\y\y\i\9\l\1\u\f\v\6\x\2\8\2\u\t\2\s\6\z\u\7\z\a\w\x\b\x\b\p\z\r\y\1\z\0\6\8\w\m\f\w\8\g\6\0\v\0\r\k\6\o\5\v\9\c\e\2\2\k\e\3\y\y\r\i\a\9\z\g\a\6\r\y\3\2\m\0\o\s\m\i\s\5\l\f\v\c\f\d\u\9\3\g\7\5\z\4\p\m\q\c\x\k\s\a\h\9\t\c ]] 00:07:36.870 10:31:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:36.870 10:31:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:36.870 [2024-11-15 10:31:02.216977] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:07:36.870 [2024-11-15 10:31:02.217129] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60655 ] 00:07:37.129 [2024-11-15 10:31:02.371324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.129 [2024-11-15 10:31:02.430281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.129 [2024-11-15 10:31:02.490031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.129  [2024-11-15T10:31:02.886Z] Copying: 512/512 [B] (average 166 kBps) 00:07:37.388 00:07:37.388 10:31:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 18oc4nbwug537rmmtyzgbt7hmyt998pie3u5o5o558fjedo25qmm06ij4um4mieec8ipywumgzpjmc51g3dzp08u61k1qltcdopdxyoqmo4voslsacci0el374ndxm7er1c24jgjc2zax7tstrsw8zq3niv8q9737yugczzqpge478zdrryelh24r4zj38nq3baej90kjzqhq623wb0z7r8bwb9s4m0xl02h78di1k489znvdbrfjg0u7kz3hg2lmni8asd11tbl8z57hnn730lcpvgj7mbmjuicwp3wg0gb7rubehxsiyygfnwq982rn5srlzgdld8ciu4zoj9htqy4jxsk4y8mrkeva8jnsgl2j7anq9cbczjocqb8ig4gkanuen1wq1e195ay0xyyi9l1ufv6x282ut2s6zu7zawxbxbpzry1z068wmfw8g60v0rk6o5v9ce22ke3yyria9zga6ry32m0osmis5lfvcfdu93g75z4pmqcxksah9tc == 
\1\8\o\c\4\n\b\w\u\g\5\3\7\r\m\m\t\y\z\g\b\t\7\h\m\y\t\9\9\8\p\i\e\3\u\5\o\5\o\5\5\8\f\j\e\d\o\2\5\q\m\m\0\6\i\j\4\u\m\4\m\i\e\e\c\8\i\p\y\w\u\m\g\z\p\j\m\c\5\1\g\3\d\z\p\0\8\u\6\1\k\1\q\l\t\c\d\o\p\d\x\y\o\q\m\o\4\v\o\s\l\s\a\c\c\i\0\e\l\3\7\4\n\d\x\m\7\e\r\1\c\2\4\j\g\j\c\2\z\a\x\7\t\s\t\r\s\w\8\z\q\3\n\i\v\8\q\9\7\3\7\y\u\g\c\z\z\q\p\g\e\4\7\8\z\d\r\r\y\e\l\h\2\4\r\4\z\j\3\8\n\q\3\b\a\e\j\9\0\k\j\z\q\h\q\6\2\3\w\b\0\z\7\r\8\b\w\b\9\s\4\m\0\x\l\0\2\h\7\8\d\i\1\k\4\8\9\z\n\v\d\b\r\f\j\g\0\u\7\k\z\3\h\g\2\l\m\n\i\8\a\s\d\1\1\t\b\l\8\z\5\7\h\n\n\7\3\0\l\c\p\v\g\j\7\m\b\m\j\u\i\c\w\p\3\w\g\0\g\b\7\r\u\b\e\h\x\s\i\y\y\g\f\n\w\q\9\8\2\r\n\5\s\r\l\z\g\d\l\d\8\c\i\u\4\z\o\j\9\h\t\q\y\4\j\x\s\k\4\y\8\m\r\k\e\v\a\8\j\n\s\g\l\2\j\7\a\n\q\9\c\b\c\z\j\o\c\q\b\8\i\g\4\g\k\a\n\u\e\n\1\w\q\1\e\1\9\5\a\y\0\x\y\y\i\9\l\1\u\f\v\6\x\2\8\2\u\t\2\s\6\z\u\7\z\a\w\x\b\x\b\p\z\r\y\1\z\0\6\8\w\m\f\w\8\g\6\0\v\0\r\k\6\o\5\v\9\c\e\2\2\k\e\3\y\y\r\i\a\9\z\g\a\6\r\y\3\2\m\0\o\s\m\i\s\5\l\f\v\c\f\d\u\9\3\g\7\5\z\4\p\m\q\c\x\k\s\a\h\9\t\c ]] 00:07:37.388 10:31:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:37.388 10:31:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:37.388 [2024-11-15 10:31:02.799825] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:07:37.388 [2024-11-15 10:31:02.799934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60661 ] 00:07:37.647 [2024-11-15 10:31:02.949710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.647 [2024-11-15 10:31:03.015286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.647 [2024-11-15 10:31:03.074695] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.647  [2024-11-15T10:31:03.404Z] Copying: 512/512 [B] (average 500 kBps) 00:07:37.906 00:07:37.907 10:31:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 18oc4nbwug537rmmtyzgbt7hmyt998pie3u5o5o558fjedo25qmm06ij4um4mieec8ipywumgzpjmc51g3dzp08u61k1qltcdopdxyoqmo4voslsacci0el374ndxm7er1c24jgjc2zax7tstrsw8zq3niv8q9737yugczzqpge478zdrryelh24r4zj38nq3baej90kjzqhq623wb0z7r8bwb9s4m0xl02h78di1k489znvdbrfjg0u7kz3hg2lmni8asd11tbl8z57hnn730lcpvgj7mbmjuicwp3wg0gb7rubehxsiyygfnwq982rn5srlzgdld8ciu4zoj9htqy4jxsk4y8mrkeva8jnsgl2j7anq9cbczjocqb8ig4gkanuen1wq1e195ay0xyyi9l1ufv6x282ut2s6zu7zawxbxbpzry1z068wmfw8g60v0rk6o5v9ce22ke3yyria9zga6ry32m0osmis5lfvcfdu93g75z4pmqcxksah9tc == 
\1\8\o\c\4\n\b\w\u\g\5\3\7\r\m\m\t\y\z\g\b\t\7\h\m\y\t\9\9\8\p\i\e\3\u\5\o\5\o\5\5\8\f\j\e\d\o\2\5\q\m\m\0\6\i\j\4\u\m\4\m\i\e\e\c\8\i\p\y\w\u\m\g\z\p\j\m\c\5\1\g\3\d\z\p\0\8\u\6\1\k\1\q\l\t\c\d\o\p\d\x\y\o\q\m\o\4\v\o\s\l\s\a\c\c\i\0\e\l\3\7\4\n\d\x\m\7\e\r\1\c\2\4\j\g\j\c\2\z\a\x\7\t\s\t\r\s\w\8\z\q\3\n\i\v\8\q\9\7\3\7\y\u\g\c\z\z\q\p\g\e\4\7\8\z\d\r\r\y\e\l\h\2\4\r\4\z\j\3\8\n\q\3\b\a\e\j\9\0\k\j\z\q\h\q\6\2\3\w\b\0\z\7\r\8\b\w\b\9\s\4\m\0\x\l\0\2\h\7\8\d\i\1\k\4\8\9\z\n\v\d\b\r\f\j\g\0\u\7\k\z\3\h\g\2\l\m\n\i\8\a\s\d\1\1\t\b\l\8\z\5\7\h\n\n\7\3\0\l\c\p\v\g\j\7\m\b\m\j\u\i\c\w\p\3\w\g\0\g\b\7\r\u\b\e\h\x\s\i\y\y\g\f\n\w\q\9\8\2\r\n\5\s\r\l\z\g\d\l\d\8\c\i\u\4\z\o\j\9\h\t\q\y\4\j\x\s\k\4\y\8\m\r\k\e\v\a\8\j\n\s\g\l\2\j\7\a\n\q\9\c\b\c\z\j\o\c\q\b\8\i\g\4\g\k\a\n\u\e\n\1\w\q\1\e\1\9\5\a\y\0\x\y\y\i\9\l\1\u\f\v\6\x\2\8\2\u\t\2\s\6\z\u\7\z\a\w\x\b\x\b\p\z\r\y\1\z\0\6\8\w\m\f\w\8\g\6\0\v\0\r\k\6\o\5\v\9\c\e\2\2\k\e\3\y\y\r\i\a\9\z\g\a\6\r\y\3\2\m\0\o\s\m\i\s\5\l\f\v\c\f\d\u\9\3\g\7\5\z\4\p\m\q\c\x\k\s\a\h\9\t\c ]] 00:07:37.907 10:31:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:37.907 10:31:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:37.907 10:31:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:37.907 10:31:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:37.907 10:31:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:37.907 10:31:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:38.166 [2024-11-15 10:31:03.406325] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
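The four copies above are the iflag=direct leg of the test's flag matrix: flags_ro=(direct nonblock) crossed with flags_rw=(direct nonblock sync dsync), each run verified by comparing the 512-byte payload before and after the trip through spdk_dd. A standalone sketch of the same sweep, reusing the spdk_dd binary path from the log but with stand-in dump paths and head -c substituted for the suite's gen_bytes helper:

# Minimal reproduction of one leg of the posix flag sweep (paths are
# stand-ins; --oflag=direct needs a filesystem that supports O_DIRECT).
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
src=/var/tmp/dd.dump0; dst=/var/tmp/dd.dump1

head -c 512 /dev/urandom > "$src"          # stands in for gen_bytes 512

for oflag in direct nonblock sync dsync; do
    "$SPDK_DD" --aio --if="$src" --iflag=direct --of="$dst" --oflag="$oflag"
    cmp "$src" "$dst"                      # the log's [[ payload == payload ]] check
done

The second leg, which follows below, repeats the same inner loop with --iflag=nonblock.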
00:07:38.166 [2024-11-15 10:31:03.406431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60670 ] 00:07:38.166 [2024-11-15 10:31:03.552329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.166 [2024-11-15 10:31:03.606462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.425 [2024-11-15 10:31:03.664830] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.425  [2024-11-15T10:31:03.923Z] Copying: 512/512 [B] (average 500 kBps) 00:07:38.425 00:07:38.684 10:31:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ dpiyc3g23wyzryql9puj9fhyabqaoq70z3fo4kxp9a7vtszde26gb1br81178saxo5lsdjs51874ernqcg6ai9kcnmsfqn5vnyilcq4j9iu76uylixu1t2cby4s7vu86z3xuxdg2026nmmm4s53j1jqrwf22ijwskaw3snydwmnvxw2ia6d19buxroglns5c4zlfmwtzkjcnow0drbdghlw2jymalyw0qxyza7i63ayfn10gvnko7fzwiwbw3c5g85vwe041wxvssxeiso91ivgy79gdsu9detd6l6qy91susgabonmd2yao6jxw21owwjvdyiqy3o2qgdloetk6deexfamglcs8mwp3d1v7rjnf0g52qruxdhnuaj7t3oc9jynxp214q8orfxyxa25fgbouod8jw8eloowgr2y6op94vmkz1wfnmwdycf7b0hwdjttf5qtmeo57flixwmftime0b97ewezfrvm421i4zl8bdbfgh79cfv9tnkta2iq2 == \d\p\i\y\c\3\g\2\3\w\y\z\r\y\q\l\9\p\u\j\9\f\h\y\a\b\q\a\o\q\7\0\z\3\f\o\4\k\x\p\9\a\7\v\t\s\z\d\e\2\6\g\b\1\b\r\8\1\1\7\8\s\a\x\o\5\l\s\d\j\s\5\1\8\7\4\e\r\n\q\c\g\6\a\i\9\k\c\n\m\s\f\q\n\5\v\n\y\i\l\c\q\4\j\9\i\u\7\6\u\y\l\i\x\u\1\t\2\c\b\y\4\s\7\v\u\8\6\z\3\x\u\x\d\g\2\0\2\6\n\m\m\m\4\s\5\3\j\1\j\q\r\w\f\2\2\i\j\w\s\k\a\w\3\s\n\y\d\w\m\n\v\x\w\2\i\a\6\d\1\9\b\u\x\r\o\g\l\n\s\5\c\4\z\l\f\m\w\t\z\k\j\c\n\o\w\0\d\r\b\d\g\h\l\w\2\j\y\m\a\l\y\w\0\q\x\y\z\a\7\i\6\3\a\y\f\n\1\0\g\v\n\k\o\7\f\z\w\i\w\b\w\3\c\5\g\8\5\v\w\e\0\4\1\w\x\v\s\s\x\e\i\s\o\9\1\i\v\g\y\7\9\g\d\s\u\9\d\e\t\d\6\l\6\q\y\9\1\s\u\s\g\a\b\o\n\m\d\2\y\a\o\6\j\x\w\2\1\o\w\w\j\v\d\y\i\q\y\3\o\2\q\g\d\l\o\e\t\k\6\d\e\e\x\f\a\m\g\l\c\s\8\m\w\p\3\d\1\v\7\r\j\n\f\0\g\5\2\q\r\u\x\d\h\n\u\a\j\7\t\3\o\c\9\j\y\n\x\p\2\1\4\q\8\o\r\f\x\y\x\a\2\5\f\g\b\o\u\o\d\8\j\w\8\e\l\o\o\w\g\r\2\y\6\o\p\9\4\v\m\k\z\1\w\f\n\m\w\d\y\c\f\7\b\0\h\w\d\j\t\t\f\5\q\t\m\e\o\5\7\f\l\i\x\w\m\f\t\i\m\e\0\b\9\7\e\w\e\z\f\r\v\m\4\2\1\i\4\z\l\8\b\d\b\f\g\h\7\9\c\f\v\9\t\n\k\t\a\2\i\q\2 ]] 00:07:38.684 10:31:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:38.684 10:31:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:38.684 [2024-11-15 10:31:03.981031] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:38.684 [2024-11-15 10:31:03.981131] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60678 ] 00:07:38.684 [2024-11-15 10:31:04.128277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.943 [2024-11-15 10:31:04.190711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.943 [2024-11-15 10:31:04.250047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.943  [2024-11-15T10:31:04.700Z] Copying: 512/512 [B] (average 500 kBps) 00:07:39.202 00:07:39.202 10:31:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ dpiyc3g23wyzryql9puj9fhyabqaoq70z3fo4kxp9a7vtszde26gb1br81178saxo5lsdjs51874ernqcg6ai9kcnmsfqn5vnyilcq4j9iu76uylixu1t2cby4s7vu86z3xuxdg2026nmmm4s53j1jqrwf22ijwskaw3snydwmnvxw2ia6d19buxroglns5c4zlfmwtzkjcnow0drbdghlw2jymalyw0qxyza7i63ayfn10gvnko7fzwiwbw3c5g85vwe041wxvssxeiso91ivgy79gdsu9detd6l6qy91susgabonmd2yao6jxw21owwjvdyiqy3o2qgdloetk6deexfamglcs8mwp3d1v7rjnf0g52qruxdhnuaj7t3oc9jynxp214q8orfxyxa25fgbouod8jw8eloowgr2y6op94vmkz1wfnmwdycf7b0hwdjttf5qtmeo57flixwmftime0b97ewezfrvm421i4zl8bdbfgh79cfv9tnkta2iq2 == \d\p\i\y\c\3\g\2\3\w\y\z\r\y\q\l\9\p\u\j\9\f\h\y\a\b\q\a\o\q\7\0\z\3\f\o\4\k\x\p\9\a\7\v\t\s\z\d\e\2\6\g\b\1\b\r\8\1\1\7\8\s\a\x\o\5\l\s\d\j\s\5\1\8\7\4\e\r\n\q\c\g\6\a\i\9\k\c\n\m\s\f\q\n\5\v\n\y\i\l\c\q\4\j\9\i\u\7\6\u\y\l\i\x\u\1\t\2\c\b\y\4\s\7\v\u\8\6\z\3\x\u\x\d\g\2\0\2\6\n\m\m\m\4\s\5\3\j\1\j\q\r\w\f\2\2\i\j\w\s\k\a\w\3\s\n\y\d\w\m\n\v\x\w\2\i\a\6\d\1\9\b\u\x\r\o\g\l\n\s\5\c\4\z\l\f\m\w\t\z\k\j\c\n\o\w\0\d\r\b\d\g\h\l\w\2\j\y\m\a\l\y\w\0\q\x\y\z\a\7\i\6\3\a\y\f\n\1\0\g\v\n\k\o\7\f\z\w\i\w\b\w\3\c\5\g\8\5\v\w\e\0\4\1\w\x\v\s\s\x\e\i\s\o\9\1\i\v\g\y\7\9\g\d\s\u\9\d\e\t\d\6\l\6\q\y\9\1\s\u\s\g\a\b\o\n\m\d\2\y\a\o\6\j\x\w\2\1\o\w\w\j\v\d\y\i\q\y\3\o\2\q\g\d\l\o\e\t\k\6\d\e\e\x\f\a\m\g\l\c\s\8\m\w\p\3\d\1\v\7\r\j\n\f\0\g\5\2\q\r\u\x\d\h\n\u\a\j\7\t\3\o\c\9\j\y\n\x\p\2\1\4\q\8\o\r\f\x\y\x\a\2\5\f\g\b\o\u\o\d\8\j\w\8\e\l\o\o\w\g\r\2\y\6\o\p\9\4\v\m\k\z\1\w\f\n\m\w\d\y\c\f\7\b\0\h\w\d\j\t\t\f\5\q\t\m\e\o\5\7\f\l\i\x\w\m\f\t\i\m\e\0\b\9\7\e\w\e\z\f\r\v\m\4\2\1\i\4\z\l\8\b\d\b\f\g\h\7\9\c\f\v\9\t\n\k\t\a\2\i\q\2 ]] 00:07:39.202 10:31:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:39.202 10:31:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:39.202 [2024-11-15 10:31:04.575775] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:39.202 [2024-11-15 10:31:04.576096] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60685 ] 00:07:39.461 [2024-11-15 10:31:04.724755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.461 [2024-11-15 10:31:04.789492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.461 [2024-11-15 10:31:04.847247] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.461  [2024-11-15T10:31:05.219Z] Copying: 512/512 [B] (average 250 kBps) 00:07:39.721 00:07:39.721 10:31:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ dpiyc3g23wyzryql9puj9fhyabqaoq70z3fo4kxp9a7vtszde26gb1br81178saxo5lsdjs51874ernqcg6ai9kcnmsfqn5vnyilcq4j9iu76uylixu1t2cby4s7vu86z3xuxdg2026nmmm4s53j1jqrwf22ijwskaw3snydwmnvxw2ia6d19buxroglns5c4zlfmwtzkjcnow0drbdghlw2jymalyw0qxyza7i63ayfn10gvnko7fzwiwbw3c5g85vwe041wxvssxeiso91ivgy79gdsu9detd6l6qy91susgabonmd2yao6jxw21owwjvdyiqy3o2qgdloetk6deexfamglcs8mwp3d1v7rjnf0g52qruxdhnuaj7t3oc9jynxp214q8orfxyxa25fgbouod8jw8eloowgr2y6op94vmkz1wfnmwdycf7b0hwdjttf5qtmeo57flixwmftime0b97ewezfrvm421i4zl8bdbfgh79cfv9tnkta2iq2 == \d\p\i\y\c\3\g\2\3\w\y\z\r\y\q\l\9\p\u\j\9\f\h\y\a\b\q\a\o\q\7\0\z\3\f\o\4\k\x\p\9\a\7\v\t\s\z\d\e\2\6\g\b\1\b\r\8\1\1\7\8\s\a\x\o\5\l\s\d\j\s\5\1\8\7\4\e\r\n\q\c\g\6\a\i\9\k\c\n\m\s\f\q\n\5\v\n\y\i\l\c\q\4\j\9\i\u\7\6\u\y\l\i\x\u\1\t\2\c\b\y\4\s\7\v\u\8\6\z\3\x\u\x\d\g\2\0\2\6\n\m\m\m\4\s\5\3\j\1\j\q\r\w\f\2\2\i\j\w\s\k\a\w\3\s\n\y\d\w\m\n\v\x\w\2\i\a\6\d\1\9\b\u\x\r\o\g\l\n\s\5\c\4\z\l\f\m\w\t\z\k\j\c\n\o\w\0\d\r\b\d\g\h\l\w\2\j\y\m\a\l\y\w\0\q\x\y\z\a\7\i\6\3\a\y\f\n\1\0\g\v\n\k\o\7\f\z\w\i\w\b\w\3\c\5\g\8\5\v\w\e\0\4\1\w\x\v\s\s\x\e\i\s\o\9\1\i\v\g\y\7\9\g\d\s\u\9\d\e\t\d\6\l\6\q\y\9\1\s\u\s\g\a\b\o\n\m\d\2\y\a\o\6\j\x\w\2\1\o\w\w\j\v\d\y\i\q\y\3\o\2\q\g\d\l\o\e\t\k\6\d\e\e\x\f\a\m\g\l\c\s\8\m\w\p\3\d\1\v\7\r\j\n\f\0\g\5\2\q\r\u\x\d\h\n\u\a\j\7\t\3\o\c\9\j\y\n\x\p\2\1\4\q\8\o\r\f\x\y\x\a\2\5\f\g\b\o\u\o\d\8\j\w\8\e\l\o\o\w\g\r\2\y\6\o\p\9\4\v\m\k\z\1\w\f\n\m\w\d\y\c\f\7\b\0\h\w\d\j\t\t\f\5\q\t\m\e\o\5\7\f\l\i\x\w\m\f\t\i\m\e\0\b\9\7\e\w\e\z\f\r\v\m\4\2\1\i\4\z\l\8\b\d\b\f\g\h\7\9\c\f\v\9\t\n\k\t\a\2\i\q\2 ]] 00:07:39.721 10:31:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:39.721 10:31:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:39.721 [2024-11-15 10:31:05.168093] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:39.721 [2024-11-15 10:31:05.168238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60698 ] 00:07:40.008 [2024-11-15 10:31:05.314288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.008 [2024-11-15 10:31:05.376574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.008 [2024-11-15 10:31:05.434206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.008  [2024-11-15T10:31:05.765Z] Copying: 512/512 [B] (average 500 kBps) 00:07:40.267 00:07:40.267 10:31:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ dpiyc3g23wyzryql9puj9fhyabqaoq70z3fo4kxp9a7vtszde26gb1br81178saxo5lsdjs51874ernqcg6ai9kcnmsfqn5vnyilcq4j9iu76uylixu1t2cby4s7vu86z3xuxdg2026nmmm4s53j1jqrwf22ijwskaw3snydwmnvxw2ia6d19buxroglns5c4zlfmwtzkjcnow0drbdghlw2jymalyw0qxyza7i63ayfn10gvnko7fzwiwbw3c5g85vwe041wxvssxeiso91ivgy79gdsu9detd6l6qy91susgabonmd2yao6jxw21owwjvdyiqy3o2qgdloetk6deexfamglcs8mwp3d1v7rjnf0g52qruxdhnuaj7t3oc9jynxp214q8orfxyxa25fgbouod8jw8eloowgr2y6op94vmkz1wfnmwdycf7b0hwdjttf5qtmeo57flixwmftime0b97ewezfrvm421i4zl8bdbfgh79cfv9tnkta2iq2 == \d\p\i\y\c\3\g\2\3\w\y\z\r\y\q\l\9\p\u\j\9\f\h\y\a\b\q\a\o\q\7\0\z\3\f\o\4\k\x\p\9\a\7\v\t\s\z\d\e\2\6\g\b\1\b\r\8\1\1\7\8\s\a\x\o\5\l\s\d\j\s\5\1\8\7\4\e\r\n\q\c\g\6\a\i\9\k\c\n\m\s\f\q\n\5\v\n\y\i\l\c\q\4\j\9\i\u\7\6\u\y\l\i\x\u\1\t\2\c\b\y\4\s\7\v\u\8\6\z\3\x\u\x\d\g\2\0\2\6\n\m\m\m\4\s\5\3\j\1\j\q\r\w\f\2\2\i\j\w\s\k\a\w\3\s\n\y\d\w\m\n\v\x\w\2\i\a\6\d\1\9\b\u\x\r\o\g\l\n\s\5\c\4\z\l\f\m\w\t\z\k\j\c\n\o\w\0\d\r\b\d\g\h\l\w\2\j\y\m\a\l\y\w\0\q\x\y\z\a\7\i\6\3\a\y\f\n\1\0\g\v\n\k\o\7\f\z\w\i\w\b\w\3\c\5\g\8\5\v\w\e\0\4\1\w\x\v\s\s\x\e\i\s\o\9\1\i\v\g\y\7\9\g\d\s\u\9\d\e\t\d\6\l\6\q\y\9\1\s\u\s\g\a\b\o\n\m\d\2\y\a\o\6\j\x\w\2\1\o\w\w\j\v\d\y\i\q\y\3\o\2\q\g\d\l\o\e\t\k\6\d\e\e\x\f\a\m\g\l\c\s\8\m\w\p\3\d\1\v\7\r\j\n\f\0\g\5\2\q\r\u\x\d\h\n\u\a\j\7\t\3\o\c\9\j\y\n\x\p\2\1\4\q\8\o\r\f\x\y\x\a\2\5\f\g\b\o\u\o\d\8\j\w\8\e\l\o\o\w\g\r\2\y\6\o\p\9\4\v\m\k\z\1\w\f\n\m\w\d\y\c\f\7\b\0\h\w\d\j\t\t\f\5\q\t\m\e\o\5\7\f\l\i\x\w\m\f\t\i\m\e\0\b\9\7\e\w\e\z\f\r\v\m\4\2\1\i\4\z\l\8\b\d\b\f\g\h\7\9\c\f\v\9\t\n\k\t\a\2\i\q\2 ]] 00:07:40.267 00:07:40.267 real 0m4.725s 00:07:40.267 user 0m2.572s 00:07:40.267 sys 0m1.175s 00:07:40.267 10:31:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:40.267 ************************************ 00:07:40.267 END TEST dd_flags_misc_forced_aio 00:07:40.267 ************************************ 00:07:40.267 10:31:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:40.267 10:31:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:40.267 10:31:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:40.267 10:31:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:40.267 ************************************ 00:07:40.267 END TEST spdk_dd_posix 00:07:40.267 ************************************ 00:07:40.267 00:07:40.267 real 0m21.274s 00:07:40.267 user 0m10.488s 00:07:40.267 sys 0m6.739s 00:07:40.267 10:31:05 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:07:40.267 10:31:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:40.526 10:31:05 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:40.526 10:31:05 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:40.526 10:31:05 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:40.526 10:31:05 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:40.526 ************************************ 00:07:40.526 START TEST spdk_dd_malloc 00:07:40.526 ************************************ 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:40.526 * Looking for test storage... 00:07:40.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:40.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.526 --rc genhtml_branch_coverage=1 00:07:40.526 --rc genhtml_function_coverage=1 00:07:40.526 --rc genhtml_legend=1 00:07:40.526 --rc geninfo_all_blocks=1 00:07:40.526 --rc geninfo_unexecuted_blocks=1 00:07:40.526 00:07:40.526 ' 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:40.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.526 --rc genhtml_branch_coverage=1 00:07:40.526 --rc genhtml_function_coverage=1 00:07:40.526 --rc genhtml_legend=1 00:07:40.526 --rc geninfo_all_blocks=1 00:07:40.526 --rc geninfo_unexecuted_blocks=1 00:07:40.526 00:07:40.526 ' 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:40.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.526 --rc genhtml_branch_coverage=1 00:07:40.526 --rc genhtml_function_coverage=1 00:07:40.526 --rc genhtml_legend=1 00:07:40.526 --rc geninfo_all_blocks=1 00:07:40.526 --rc geninfo_unexecuted_blocks=1 00:07:40.526 00:07:40.526 ' 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:40.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.526 --rc genhtml_branch_coverage=1 00:07:40.526 --rc genhtml_function_coverage=1 00:07:40.526 --rc genhtml_legend=1 00:07:40.526 --rc geninfo_all_blocks=1 00:07:40.526 --rc geninfo_unexecuted_blocks=1 00:07:40.526 00:07:40.526 ' 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.526 10:31:05 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.526 10:31:05 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:40.527 10:31:05 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.527 10:31:05 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:40.527 10:31:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:40.527 10:31:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:40.527 10:31:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:40.527 ************************************ 00:07:40.527 START TEST dd_malloc_copy 00:07:40.527 ************************************ 00:07:40.527 10:31:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1127 -- # malloc_copy 00:07:40.527 10:31:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:40.527 10:31:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:40.527 10:31:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:07:40.527 10:31:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:40.527 10:31:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:40.527 10:31:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:40.527 10:31:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:40.527 10:31:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:40.527 10:31:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:40.527 10:31:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:40.785 [2024-11-15 10:31:06.066358] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:07:40.785 [2024-11-15 10:31:06.066666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60780 ] 00:07:40.785 { 00:07:40.785 "subsystems": [ 00:07:40.785 { 00:07:40.785 "subsystem": "bdev", 00:07:40.785 "config": [ 00:07:40.785 { 00:07:40.785 "params": { 00:07:40.785 "block_size": 512, 00:07:40.785 "num_blocks": 1048576, 00:07:40.785 "name": "malloc0" 00:07:40.785 }, 00:07:40.785 "method": "bdev_malloc_create" 00:07:40.785 }, 00:07:40.785 { 00:07:40.785 "params": { 00:07:40.785 "block_size": 512, 00:07:40.785 "num_blocks": 1048576, 00:07:40.785 "name": "malloc1" 00:07:40.785 }, 00:07:40.785 "method": "bdev_malloc_create" 00:07:40.785 }, 00:07:40.785 { 00:07:40.785 "method": "bdev_wait_for_examine" 00:07:40.785 } 00:07:40.785 ] 00:07:40.785 } 00:07:40.785 ] 00:07:40.785 } 00:07:40.785 [2024-11-15 10:31:06.220255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.044 [2024-11-15 10:31:06.294155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.044 [2024-11-15 10:31:06.359981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.420  [2024-11-15T10:31:08.852Z] Copying: 186/512 [MB] (186 MBps) [2024-11-15T10:31:09.788Z] Copying: 373/512 [MB] (187 MBps) [2024-11-15T10:31:10.047Z] Copying: 512/512 [MB] (average 189 MBps) 00:07:44.549 00:07:44.549 10:31:10 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:44.549 10:31:10 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:44.549 10:31:10 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:44.549 10:31:10 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:44.807 [2024-11-15 10:31:10.091537] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
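The JSON delivered on /dev/fd/62 above creates two RAM-backed bdevs of 1048576 blocks x 512 B (512 MiB each) and streams one onto the other, effectively a memory-to-memory copy on the single reactor core, which is consistent with the ~186-196 MBps averages reported. An equivalent one-shot invocation, sketched with a process substitution in place of the suite's gen_conf file descriptor:

# Sketch of the malloc-to-malloc copy, with the config inlined instead of
# arriving on /dev/fd/62 via gen_conf ($SPDK_DD as defined earlier).
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
)

The reverse pass below swaps --ib and --ob to copy malloc1 back onto malloc0 through the same config.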
00:07:44.807 [2024-11-15 10:31:10.091662] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60822 ] 00:07:44.807 { 00:07:44.807 "subsystems": [ 00:07:44.807 { 00:07:44.807 "subsystem": "bdev", 00:07:44.807 "config": [ 00:07:44.807 { 00:07:44.807 "params": { 00:07:44.807 "block_size": 512, 00:07:44.807 "num_blocks": 1048576, 00:07:44.807 "name": "malloc0" 00:07:44.807 }, 00:07:44.807 "method": "bdev_malloc_create" 00:07:44.807 }, 00:07:44.807 { 00:07:44.807 "params": { 00:07:44.807 "block_size": 512, 00:07:44.807 "num_blocks": 1048576, 00:07:44.807 "name": "malloc1" 00:07:44.807 }, 00:07:44.807 "method": "bdev_malloc_create" 00:07:44.807 }, 00:07:44.807 { 00:07:44.807 "method": "bdev_wait_for_examine" 00:07:44.807 } 00:07:44.807 ] 00:07:44.807 } 00:07:44.807 ] 00:07:44.807 } 00:07:44.807 [2024-11-15 10:31:10.238281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.066 [2024-11-15 10:31:10.303289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.066 [2024-11-15 10:31:10.359944] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.439  [2024-11-15T10:31:12.873Z] Copying: 195/512 [MB] (195 MBps) [2024-11-15T10:31:13.441Z] Copying: 392/512 [MB] (196 MBps) [2024-11-15T10:31:14.006Z] Copying: 512/512 [MB] (average 196 MBps) 00:07:48.508 00:07:48.508 00:07:48.508 real 0m7.869s 00:07:48.508 user 0m6.820s 00:07:48.508 sys 0m0.881s 00:07:48.508 10:31:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:48.508 ************************************ 00:07:48.508 END TEST dd_malloc_copy 00:07:48.508 ************************************ 00:07:48.508 10:31:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:48.508 ************************************ 00:07:48.508 END TEST spdk_dd_malloc 00:07:48.508 ************************************ 00:07:48.508 00:07:48.508 real 0m8.108s 00:07:48.508 user 0m6.957s 00:07:48.508 sys 0m0.982s 00:07:48.508 10:31:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:48.508 10:31:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:48.508 10:31:13 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:48.508 10:31:13 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:48.508 10:31:13 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:48.508 10:31:13 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:48.508 ************************************ 00:07:48.508 START TEST spdk_dd_bdev_to_bdev 00:07:48.508 ************************************ 00:07:48.508 10:31:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:48.767 * Looking for test storage... 
00:07:48.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lcov --version 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:48.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.767 --rc genhtml_branch_coverage=1 00:07:48.767 --rc genhtml_function_coverage=1 00:07:48.767 --rc genhtml_legend=1 00:07:48.767 --rc geninfo_all_blocks=1 00:07:48.767 --rc geninfo_unexecuted_blocks=1 00:07:48.767 00:07:48.767 ' 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:48.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.767 --rc genhtml_branch_coverage=1 00:07:48.767 --rc genhtml_function_coverage=1 00:07:48.767 --rc genhtml_legend=1 00:07:48.767 --rc geninfo_all_blocks=1 00:07:48.767 --rc geninfo_unexecuted_blocks=1 00:07:48.767 00:07:48.767 ' 00:07:48.767 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:48.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.767 --rc genhtml_branch_coverage=1 00:07:48.768 --rc genhtml_function_coverage=1 00:07:48.768 --rc genhtml_legend=1 00:07:48.768 --rc geninfo_all_blocks=1 00:07:48.768 --rc geninfo_unexecuted_blocks=1 00:07:48.768 00:07:48.768 ' 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:48.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.768 --rc genhtml_branch_coverage=1 00:07:48.768 --rc genhtml_function_coverage=1 00:07:48.768 --rc genhtml_legend=1 00:07:48.768 --rc geninfo_all_blocks=1 00:07:48.768 --rc geninfo_unexecuted_blocks=1 00:07:48.768 00:07:48.768 ' 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.768 10:31:14 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:48.768 ************************************ 00:07:48.768 START TEST dd_inflate_file 00:07:48.768 ************************************ 00:07:48.768 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:48.768 [2024-11-15 10:31:14.215466] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
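For the bdev_to_bdev suite the malloc bdevs give way to real NVMe namespaces: the method_bdev_nvme_attach_controller_0/_1 arrays declared above serialize into config entries of the following shape, matching the JSON that appears in the runs below. Attaching a controller named Nvme0 exposes its first namespace as the bdev Nvme0n1, which is what the later --ib/--ob arguments reference.

{ "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
  "method": "bdev_nvme_attach_controller" },
{ "params": { "trtype": "pcie", "traddr": "0000:00:11.0", "name": "Nvme1" },
  "method": "bdev_nvme_attach_controller" }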
00:07:48.768 [2024-11-15 10:31:14.215585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60946 ] 00:07:49.027 [2024-11-15 10:31:14.369213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.027 [2024-11-15 10:31:14.440130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.027 [2024-11-15 10:31:14.496855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.286  [2024-11-15T10:31:14.784Z] Copying: 64/64 [MB] (average 1729 MBps) 00:07:49.286 00:07:49.286 00:07:49.286 real 0m0.605s 00:07:49.286 user 0m0.356s 00:07:49.286 sys 0m0.297s 00:07:49.286 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:49.286 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:49.286 ************************************ 00:07:49.286 END TEST dd_inflate_file 00:07:49.286 ************************************ 00:07:49.544 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:49.544 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:49.544 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:49.544 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:49.544 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:49.544 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:49.544 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:07:49.544 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:49.544 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:49.544 ************************************ 00:07:49.544 START TEST dd_copy_to_out_bdev 00:07:49.544 ************************************ 00:07:49.544 10:31:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:49.544 { 00:07:49.544 "subsystems": [ 00:07:49.544 { 00:07:49.544 "subsystem": "bdev", 00:07:49.544 "config": [ 00:07:49.544 { 00:07:49.544 "params": { 00:07:49.544 "trtype": "pcie", 00:07:49.544 "traddr": "0000:00:10.0", 00:07:49.544 "name": "Nvme0" 00:07:49.544 }, 00:07:49.544 "method": "bdev_nvme_attach_controller" 00:07:49.544 }, 00:07:49.544 { 00:07:49.544 "params": { 00:07:49.544 "trtype": "pcie", 00:07:49.544 "traddr": "0000:00:11.0", 00:07:49.544 "name": "Nvme1" 00:07:49.544 }, 00:07:49.544 "method": "bdev_nvme_attach_controller" 00:07:49.544 }, 00:07:49.544 { 00:07:49.544 "method": "bdev_wait_for_examine" 00:07:49.544 } 00:07:49.544 ] 00:07:49.544 } 00:07:49.544 ] 00:07:49.544 } 00:07:49.544 [2024-11-15 10:31:14.878660] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
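dd_inflate_file grows dump0 by appending 64 x 1 MiB of zeroes behind the 26-byte magic line already echoed into it ('This Is Our Magic, find it' plus a trailing newline), which is exactly the 64*1048576 + 27 = 67108891 bytes that the wc -c check above records as test_file0_size. Reduced to one command under the same stand-in paths as earlier:

# The inflation step, reduced: dump0 already holds the 27-byte magic line,
# and 64 x 1 MiB of zeroes are appended behind it.
"$SPDK_DD" --if=/dev/zero --of="$src" --oflag=append --bs=1048576 --count=64
wc -c < "$src"                             # expect 67108891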
00:07:49.544 [2024-11-15 10:31:14.878753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60981 ] 00:07:49.544 [2024-11-15 10:31:15.023898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.802 [2024-11-15 10:31:15.088917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.802 [2024-11-15 10:31:15.143710] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.733  [2024-11-15T10:31:16.489Z] Copying: 64/64 [MB] (average 71 MBps) 00:07:50.991 00:07:50.991 ************************************ 00:07:50.991 END TEST dd_copy_to_out_bdev 00:07:50.991 ************************************ 00:07:50.991 00:07:50.991 real 0m1.630s 00:07:50.991 user 0m1.419s 00:07:50.991 sys 0m1.221s 00:07:50.991 10:31:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:50.991 10:31:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:51.249 10:31:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:51.249 10:31:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:51.249 10:31:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:51.249 10:31:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:51.249 10:31:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:51.249 ************************************ 00:07:51.249 START TEST dd_offset_magic 00:07:51.249 ************************************ 00:07:51.249 10:31:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1127 -- # offset_magic 00:07:51.249 10:31:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:51.249 10:31:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:51.249 10:31:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:51.249 10:31:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:51.249 10:31:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:51.249 10:31:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:51.249 10:31:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:51.249 10:31:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:51.249 { 00:07:51.249 "subsystems": [ 00:07:51.249 { 00:07:51.249 "subsystem": "bdev", 00:07:51.249 "config": [ 00:07:51.249 { 00:07:51.249 "params": { 00:07:51.249 "trtype": "pcie", 00:07:51.249 "traddr": "0000:00:10.0", 00:07:51.249 "name": "Nvme0" 00:07:51.249 }, 00:07:51.249 "method": "bdev_nvme_attach_controller" 00:07:51.249 }, 00:07:51.249 { 00:07:51.249 "params": { 00:07:51.249 "trtype": "pcie", 00:07:51.249 "traddr": "0000:00:11.0", 00:07:51.249 "name": "Nvme1" 00:07:51.249 }, 00:07:51.249 "method": 
"bdev_nvme_attach_controller" 00:07:51.249 }, 00:07:51.249 { 00:07:51.249 "method": "bdev_wait_for_examine" 00:07:51.249 } 00:07:51.249 ] 00:07:51.249 } 00:07:51.249 ] 00:07:51.249 } 00:07:51.249 [2024-11-15 10:31:16.568943] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:07:51.249 [2024-11-15 10:31:16.569111] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61027 ] 00:07:51.249 [2024-11-15 10:31:16.710899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.508 [2024-11-15 10:31:16.776222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.508 [2024-11-15 10:31:16.831006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.766  [2024-11-15T10:31:17.523Z] Copying: 65/65 [MB] (average 1160 MBps) 00:07:52.025 00:07:52.025 10:31:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:52.025 10:31:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:52.025 10:31:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:52.025 10:31:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:52.025 { 00:07:52.025 "subsystems": [ 00:07:52.025 { 00:07:52.025 "subsystem": "bdev", 00:07:52.025 "config": [ 00:07:52.025 { 00:07:52.025 "params": { 00:07:52.025 "trtype": "pcie", 00:07:52.025 "traddr": "0000:00:10.0", 00:07:52.025 "name": "Nvme0" 00:07:52.025 }, 00:07:52.025 "method": "bdev_nvme_attach_controller" 00:07:52.025 }, 00:07:52.025 { 00:07:52.025 "params": { 00:07:52.025 "trtype": "pcie", 00:07:52.025 "traddr": "0000:00:11.0", 00:07:52.025 "name": "Nvme1" 00:07:52.025 }, 00:07:52.025 "method": "bdev_nvme_attach_controller" 00:07:52.025 }, 00:07:52.025 { 00:07:52.025 "method": "bdev_wait_for_examine" 00:07:52.025 } 00:07:52.025 ] 00:07:52.025 } 00:07:52.025 ] 00:07:52.025 } 00:07:52.025 [2024-11-15 10:31:17.372598] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:52.025 [2024-11-15 10:31:17.372731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61042 ] 00:07:52.283 [2024-11-15 10:31:17.520854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.283 [2024-11-15 10:31:17.605207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.283 [2024-11-15 10:31:17.666436] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.540  [2024-11-15T10:31:18.296Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:52.798 00:07:52.798 10:31:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:52.798 10:31:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:52.798 10:31:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:52.798 10:31:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:52.798 10:31:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:52.798 10:31:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:52.798 10:31:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:52.798 [2024-11-15 10:31:18.110729] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:52.798 [2024-11-15 10:31:18.110856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61058 ] 00:07:52.798 { 00:07:52.798 "subsystems": [ 00:07:52.798 { 00:07:52.798 "subsystem": "bdev", 00:07:52.798 "config": [ 00:07:52.798 { 00:07:52.798 "params": { 00:07:52.798 "trtype": "pcie", 00:07:52.798 "traddr": "0000:00:10.0", 00:07:52.798 "name": "Nvme0" 00:07:52.798 }, 00:07:52.798 "method": "bdev_nvme_attach_controller" 00:07:52.798 }, 00:07:52.798 { 00:07:52.798 "params": { 00:07:52.798 "trtype": "pcie", 00:07:52.798 "traddr": "0000:00:11.0", 00:07:52.798 "name": "Nvme1" 00:07:52.798 }, 00:07:52.798 "method": "bdev_nvme_attach_controller" 00:07:52.798 }, 00:07:52.798 { 00:07:52.798 "method": "bdev_wait_for_examine" 00:07:52.798 } 00:07:52.798 ] 00:07:52.798 } 00:07:52.798 ] 00:07:52.798 } 00:07:52.798 [2024-11-15 10:31:18.261836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.056 [2024-11-15 10:31:18.327075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.056 [2024-11-15 10:31:18.388418] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.316  [2024-11-15T10:31:19.074Z] Copying: 65/65 [MB] (average 1274 MBps) 00:07:53.576 00:07:53.576 10:31:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:53.576 10:31:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:53.576 10:31:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:53.576 10:31:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:53.576 [2024-11-15 10:31:18.925346] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
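The for offset loop (dd/bdev_to_bdev.sh@18) repeats the same cycle per entry in offsets, here a 64 MiB offset after the earlier 16 MiB one. A reading of the flags above, not new behavior:

  # --ib=Nvme0n1 --ob=Nvme1n1            bdev-to-bdev copy
  # --count=65 --seek=64 --bs=1048576    65 x 1 MiB blocks, written at 64 MiB
  # the read-back below then pulls 1 MiB from that offset for the magic check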
00:07:53.576 [2024-11-15 10:31:18.925446] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61078 ] 00:07:53.576 { 00:07:53.576 "subsystems": [ 00:07:53.576 { 00:07:53.576 "subsystem": "bdev", 00:07:53.576 "config": [ 00:07:53.576 { 00:07:53.576 "params": { 00:07:53.576 "trtype": "pcie", 00:07:53.576 "traddr": "0000:00:10.0", 00:07:53.576 "name": "Nvme0" 00:07:53.576 }, 00:07:53.576 "method": "bdev_nvme_attach_controller" 00:07:53.576 }, 00:07:53.576 { 00:07:53.576 "params": { 00:07:53.576 "trtype": "pcie", 00:07:53.576 "traddr": "0000:00:11.0", 00:07:53.576 "name": "Nvme1" 00:07:53.576 }, 00:07:53.576 "method": "bdev_nvme_attach_controller" 00:07:53.576 }, 00:07:53.576 { 00:07:53.576 "method": "bdev_wait_for_examine" 00:07:53.576 } 00:07:53.576 ] 00:07:53.576 } 00:07:53.576 ] 00:07:53.576 } 00:07:53.576 [2024-11-15 10:31:19.069070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.834 [2024-11-15 10:31:19.132838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.834 [2024-11-15 10:31:19.187205] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.092  [2024-11-15T10:31:19.590Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:54.092 00:07:54.092 ************************************ 00:07:54.092 END TEST dd_offset_magic 00:07:54.092 ************************************ 00:07:54.092 10:31:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:54.092 10:31:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:54.092 00:07:54.092 real 0m3.068s 00:07:54.092 user 0m2.187s 00:07:54.092 sys 0m0.917s 00:07:54.092 10:31:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:54.092 10:31:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:54.350 10:31:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:54.350 10:31:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:54.350 10:31:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:54.350 10:31:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:54.350 10:31:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:54.350 10:31:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:54.350 10:31:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:54.350 10:31:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:54.350 10:31:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:54.350 10:31:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:54.350 10:31:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:54.350 [2024-11-15 10:31:19.657576] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
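That closes dd_offset_magic; cleanup starts with clear_nvme, which zero-fills the region the tests touched. count=5 above is consistent with rounding size=4194330 bytes up to whole 1 MiB blocks; a sketch of that arithmetic (clear_nvme's exact internals live in dd/common.sh):

  size=4194330 bs=1048576
  count=$(( (size + bs - 1) / bs ))    # ceil(4194330 / 1048576) = 5
  # gen_conf is the harness helper that emits the JSON config shown above
  spdk_dd --if=/dev/zero --bs="$bs" --ob=Nvme0n1 --count="$count" --json <(gen_conf)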
00:07:54.350 [2024-11-15 10:31:19.657672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61114 ] 00:07:54.350 { 00:07:54.350 "subsystems": [ 00:07:54.350 { 00:07:54.350 "subsystem": "bdev", 00:07:54.350 "config": [ 00:07:54.350 { 00:07:54.350 "params": { 00:07:54.350 "trtype": "pcie", 00:07:54.350 "traddr": "0000:00:10.0", 00:07:54.350 "name": "Nvme0" 00:07:54.350 }, 00:07:54.350 "method": "bdev_nvme_attach_controller" 00:07:54.350 }, 00:07:54.350 { 00:07:54.350 "params": { 00:07:54.350 "trtype": "pcie", 00:07:54.350 "traddr": "0000:00:11.0", 00:07:54.350 "name": "Nvme1" 00:07:54.350 }, 00:07:54.350 "method": "bdev_nvme_attach_controller" 00:07:54.350 }, 00:07:54.350 { 00:07:54.350 "method": "bdev_wait_for_examine" 00:07:54.350 } 00:07:54.350 ] 00:07:54.350 } 00:07:54.350 ] 00:07:54.350 } 00:07:54.350 [2024-11-15 10:31:19.801137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.609 [2024-11-15 10:31:19.866608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.609 [2024-11-15 10:31:19.920470] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.867  [2024-11-15T10:31:20.365Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:07:54.867 00:07:54.867 10:31:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:54.867 10:31:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:54.867 10:31:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:54.867 10:31:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:54.867 10:31:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:54.867 10:31:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:54.867 10:31:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:54.867 10:31:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:54.867 10:31:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:54.867 10:31:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:55.124 { 00:07:55.124 "subsystems": [ 00:07:55.124 { 00:07:55.124 "subsystem": "bdev", 00:07:55.124 "config": [ 00:07:55.124 { 00:07:55.124 "params": { 00:07:55.124 "trtype": "pcie", 00:07:55.124 "traddr": "0000:00:10.0", 00:07:55.124 "name": "Nvme0" 00:07:55.124 }, 00:07:55.124 "method": "bdev_nvme_attach_controller" 00:07:55.124 }, 00:07:55.124 { 00:07:55.124 "params": { 00:07:55.124 "trtype": "pcie", 00:07:55.124 "traddr": "0000:00:11.0", 00:07:55.124 "name": "Nvme1" 00:07:55.124 }, 00:07:55.124 "method": "bdev_nvme_attach_controller" 00:07:55.124 }, 00:07:55.124 { 00:07:55.124 "method": "bdev_wait_for_examine" 00:07:55.124 } 00:07:55.124 ] 00:07:55.124 } 00:07:55.124 ] 00:07:55.124 } 00:07:55.124 [2024-11-15 10:31:20.379079] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:07:55.124 [2024-11-15 10:31:20.379193] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61131 ] 00:07:55.124 [2024-11-15 10:31:20.532141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.124 [2024-11-15 10:31:20.594956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.382 [2024-11-15 10:31:20.648714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.382  [2024-11-15T10:31:21.139Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:07:55.641 00:07:55.641 10:31:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:55.641 ************************************ 00:07:55.641 END TEST spdk_dd_bdev_to_bdev 00:07:55.641 ************************************ 00:07:55.641 00:07:55.641 real 0m7.093s 00:07:55.641 user 0m5.147s 00:07:55.641 sys 0m3.125s 00:07:55.641 10:31:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:55.641 10:31:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:55.641 10:31:21 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:55.641 10:31:21 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:55.641 10:31:21 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:55.641 10:31:21 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:55.641 10:31:21 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:55.641 ************************************ 00:07:55.641 START TEST spdk_dd_uring 00:07:55.641 ************************************ 00:07:55.641 10:31:21 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:55.898 * Looking for test storage... 
00:07:55.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lcov --version 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:07:55.898 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:55.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.899 --rc genhtml_branch_coverage=1 00:07:55.899 --rc genhtml_function_coverage=1 00:07:55.899 --rc genhtml_legend=1 00:07:55.899 --rc geninfo_all_blocks=1 00:07:55.899 --rc geninfo_unexecuted_blocks=1 00:07:55.899 00:07:55.899 ' 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:55.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.899 --rc genhtml_branch_coverage=1 00:07:55.899 --rc genhtml_function_coverage=1 00:07:55.899 --rc genhtml_legend=1 00:07:55.899 --rc geninfo_all_blocks=1 00:07:55.899 --rc geninfo_unexecuted_blocks=1 00:07:55.899 00:07:55.899 ' 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:55.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.899 --rc genhtml_branch_coverage=1 00:07:55.899 --rc genhtml_function_coverage=1 00:07:55.899 --rc genhtml_legend=1 00:07:55.899 --rc geninfo_all_blocks=1 00:07:55.899 --rc geninfo_unexecuted_blocks=1 00:07:55.899 00:07:55.899 ' 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:55.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.899 --rc genhtml_branch_coverage=1 00:07:55.899 --rc genhtml_function_coverage=1 00:07:55.899 --rc genhtml_legend=1 00:07:55.899 --rc geninfo_all_blocks=1 00:07:55.899 --rc geninfo_unexecuted_blocks=1 00:07:55.899 00:07:55.899 ' 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:55.899 ************************************ 00:07:55.899 START TEST dd_uring_copy 00:07:55.899 ************************************ 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1127 -- # uring_zram_copy 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:55.899 
10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=lzgh2mmjlnqt4n5cnnakgtiqehiu68kuivyltsxmr485j3hhvfxr1tu5avsil9s6ypg1m5yyv6qwzunadqn9ybnsxu4z637mjz9816f3bvfc4pv3wkxwkc03986md2f1cbdx8ph1wcuq20hup0s88bci5aloorupizddkkae8up56l27wlk3aqp5onshu9mq998q0rv9l2hw8nve2ou6yoj2ox11li82upaxegrh3lrga5hk32su1j9nx1y3hisdbmh3ga6mws1sjshj0cabbyqmmz0hff3yohi4rbysv90za77wd6smn15f68cnw3vuipsgzg806e8p1nbrwgadrpjzgm5k69fpf7w41lf87ap6g83n7e44e5uxrdh5tlavl9qyzj7td18r6j9008s9oxzbn7qlre535yt3ld151t06ah8hzp43u9q2zjlsdkoci323o4ftkfvz9epf6cnruio897jrmoy6aqe4fn392ttcwcejxqwi3ljk0zwkenchmlzt19nqsfme1twx6g44pxaxwkoc0sq7z0ktsxu2tzq3bjde3oi7fu5apvphgdx6bp316ldjsc8i72tlyvv679vwev8v0urueux0sm600y3elrqbonb9lde4kfjiyhyxj49xb9ggs40dyvju58fvp4t6qbk12syd66qfu4dxa4on6evms0cw98w6xufvjgtktqw848dbuiqebi0fc0yvrpcfyvtygoorkg25tlodyjepql36ylw3r9svpatpu0a3ws2opi270c9hg7wxvgx1sctfqno5vlenfarkb0y76hc00e5lay3wz2mrlz84x8swwh4jiaikzhdvvbakezobw3omj4rne9ov6o6mdk22502gpqm2iqsna64ydw4vk0xfu3x5dnojc31dwf40vql006qdkovdmgmr6nj6nqmochjox1mhc8s3xrtfsjyz1cnxhkbuwct62z00onp5pq8dnooxf12jq6n3es5una4zbwbprtfqw9kep0kfli71gwnu 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
lzgh2mmjlnqt4n5cnnakgtiqehiu68kuivyltsxmr485j3hhvfxr1tu5avsil9s6ypg1m5yyv6qwzunadqn9ybnsxu4z637mjz9816f3bvfc4pv3wkxwkc03986md2f1cbdx8ph1wcuq20hup0s88bci5aloorupizddkkae8up56l27wlk3aqp5onshu9mq998q0rv9l2hw8nve2ou6yoj2ox11li82upaxegrh3lrga5hk32su1j9nx1y3hisdbmh3ga6mws1sjshj0cabbyqmmz0hff3yohi4rbysv90za77wd6smn15f68cnw3vuipsgzg806e8p1nbrwgadrpjzgm5k69fpf7w41lf87ap6g83n7e44e5uxrdh5tlavl9qyzj7td18r6j9008s9oxzbn7qlre535yt3ld151t06ah8hzp43u9q2zjlsdkoci323o4ftkfvz9epf6cnruio897jrmoy6aqe4fn392ttcwcejxqwi3ljk0zwkenchmlzt19nqsfme1twx6g44pxaxwkoc0sq7z0ktsxu2tzq3bjde3oi7fu5apvphgdx6bp316ldjsc8i72tlyvv679vwev8v0urueux0sm600y3elrqbonb9lde4kfjiyhyxj49xb9ggs40dyvju58fvp4t6qbk12syd66qfu4dxa4on6evms0cw98w6xufvjgtktqw848dbuiqebi0fc0yvrpcfyvtygoorkg25tlodyjepql36ylw3r9svpatpu0a3ws2opi270c9hg7wxvgx1sctfqno5vlenfarkb0y76hc00e5lay3wz2mrlz84x8swwh4jiaikzhdvvbakezobw3omj4rne9ov6o6mdk22502gpqm2iqsna64ydw4vk0xfu3x5dnojc31dwf40vql006qdkovdmgmr6nj6nqmochjox1mhc8s3xrtfsjyz1cnxhkbuwct62z00onp5pq8dnooxf12jq6n3es5una4zbwbprtfqw9kep0kfli71gwnu 00:07:55.899 10:31:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:56.177 [2024-11-15 10:31:21.402357] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:07:56.177 [2024-11-15 10:31:21.402466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61209 ] 00:07:56.177 [2024-11-15 10:31:21.550557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.177 [2024-11-15 10:31:21.624218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.469 [2024-11-15 10:31:21.681294] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.036  [2024-11-15T10:31:22.793Z] Copying: 511/511 [MB] (average 1368 MBps) 00:07:57.295 00:07:57.295 10:31:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:57.295 10:31:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:57.295 10:31:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:57.295 10:31:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:57.295 [2024-11-15 10:31:22.710982] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
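The uring copy test runs against a zram device exposed through SPDK's io_uring bdev. A sketch of the setup driven by init_zram/set_zram_dev above, using the kernel's zram hot-add sysfs interface (that the 512M echo targets disksize is an assumption; xtrace does not show the redirection):

  id=$(cat /sys/class/zram-control/hot_add)     # returned 1 on this run
  echo 512M > "/sys/block/zram${id}/disksize"   # size the device

magic.dump0 is then the 1024-byte generated magic plus a 536869887-byte zero append (--oflag=append --bs=536869887 --count=1), one byte short of 512 MiB; the 511/511 MB copy above is that zero append, and the pass that starts here pushes dump0 into uring0.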
00:07:57.295 [2024-11-15 10:31:22.711079] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61225 ] 00:07:57.295 { 00:07:57.295 "subsystems": [ 00:07:57.295 { 00:07:57.295 "subsystem": "bdev", 00:07:57.295 "config": [ 00:07:57.295 { 00:07:57.295 "params": { 00:07:57.295 "block_size": 512, 00:07:57.295 "num_blocks": 1048576, 00:07:57.295 "name": "malloc0" 00:07:57.295 }, 00:07:57.295 "method": "bdev_malloc_create" 00:07:57.295 }, 00:07:57.295 { 00:07:57.295 "params": { 00:07:57.295 "filename": "/dev/zram1", 00:07:57.295 "name": "uring0" 00:07:57.295 }, 00:07:57.295 "method": "bdev_uring_create" 00:07:57.295 }, 00:07:57.295 { 00:07:57.295 "method": "bdev_wait_for_examine" 00:07:57.295 } 00:07:57.295 ] 00:07:57.295 } 00:07:57.295 ] 00:07:57.295 } 00:07:57.553 [2024-11-15 10:31:22.868501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.553 [2024-11-15 10:31:22.940624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.553 [2024-11-15 10:31:23.000887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.925  [2024-11-15T10:31:25.354Z] Copying: 206/512 [MB] (206 MBps) [2024-11-15T10:31:25.667Z] Copying: 429/512 [MB] (223 MBps) [2024-11-15T10:31:26.232Z] Copying: 512/512 [MB] (average 216 MBps) 00:08:00.734 00:08:00.734 10:31:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:00.734 10:31:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:00.734 10:31:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:00.734 10:31:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:00.734 { 00:08:00.734 "subsystems": [ 00:08:00.734 { 00:08:00.734 "subsystem": "bdev", 00:08:00.734 "config": [ 00:08:00.734 { 00:08:00.734 "params": { 00:08:00.734 "block_size": 512, 00:08:00.734 "num_blocks": 1048576, 00:08:00.734 "name": "malloc0" 00:08:00.734 }, 00:08:00.734 "method": "bdev_malloc_create" 00:08:00.734 }, 00:08:00.734 { 00:08:00.734 "params": { 00:08:00.734 "filename": "/dev/zram1", 00:08:00.734 "name": "uring0" 00:08:00.734 }, 00:08:00.734 "method": "bdev_uring_create" 00:08:00.734 }, 00:08:00.734 { 00:08:00.734 "method": "bdev_wait_for_examine" 00:08:00.734 } 00:08:00.734 ] 00:08:00.734 } 00:08:00.734 ] 00:08:00.734 } 00:08:00.734 [2024-11-15 10:31:26.030318] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
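For reference, the bdev config behind these copies (printed interleaved with timestamps above), reflowed into one piece: a 512 MiB malloc bdev (512 B x 1048576 blocks) sized to match the zram device, plus the uring bdev layered on /dev/zram1:

  {"subsystems":[{"subsystem":"bdev","config":[
   {"params":{"block_size":512,"num_blocks":1048576,"name":"malloc0"},"method":"bdev_malloc_create"},
   {"params":{"filename":"/dev/zram1","name":"uring0"},"method":"bdev_uring_create"},
   {"method":"bdev_wait_for_examine"}]}]}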
00:08:00.734 [2024-11-15 10:31:26.030749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61273 ] 00:08:00.734 [2024-11-15 10:31:26.183360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.992 [2024-11-15 10:31:26.268920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.992 [2024-11-15 10:31:26.328352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.364  [2024-11-15T10:31:28.796Z] Copying: 162/512 [MB] (162 MBps) [2024-11-15T10:31:29.734Z] Copying: 331/512 [MB] (168 MBps) [2024-11-15T10:31:29.734Z] Copying: 497/512 [MB] (165 MBps) [2024-11-15T10:31:30.300Z] Copying: 512/512 [MB] (average 165 MBps) 00:08:04.802 00:08:04.802 10:31:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:04.802 10:31:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ lzgh2mmjlnqt4n5cnnakgtiqehiu68kuivyltsxmr485j3hhvfxr1tu5avsil9s6ypg1m5yyv6qwzunadqn9ybnsxu4z637mjz9816f3bvfc4pv3wkxwkc03986md2f1cbdx8ph1wcuq20hup0s88bci5aloorupizddkkae8up56l27wlk3aqp5onshu9mq998q0rv9l2hw8nve2ou6yoj2ox11li82upaxegrh3lrga5hk32su1j9nx1y3hisdbmh3ga6mws1sjshj0cabbyqmmz0hff3yohi4rbysv90za77wd6smn15f68cnw3vuipsgzg806e8p1nbrwgadrpjzgm5k69fpf7w41lf87ap6g83n7e44e5uxrdh5tlavl9qyzj7td18r6j9008s9oxzbn7qlre535yt3ld151t06ah8hzp43u9q2zjlsdkoci323o4ftkfvz9epf6cnruio897jrmoy6aqe4fn392ttcwcejxqwi3ljk0zwkenchmlzt19nqsfme1twx6g44pxaxwkoc0sq7z0ktsxu2tzq3bjde3oi7fu5apvphgdx6bp316ldjsc8i72tlyvv679vwev8v0urueux0sm600y3elrqbonb9lde4kfjiyhyxj49xb9ggs40dyvju58fvp4t6qbk12syd66qfu4dxa4on6evms0cw98w6xufvjgtktqw848dbuiqebi0fc0yvrpcfyvtygoorkg25tlodyjepql36ylw3r9svpatpu0a3ws2opi270c9hg7wxvgx1sctfqno5vlenfarkb0y76hc00e5lay3wz2mrlz84x8swwh4jiaikzhdvvbakezobw3omj4rne9ov6o6mdk22502gpqm2iqsna64ydw4vk0xfu3x5dnojc31dwf40vql006qdkovdmgmr6nj6nqmochjox1mhc8s3xrtfsjyz1cnxhkbuwct62z00onp5pq8dnooxf12jq6n3es5una4zbwbprtfqw9kep0kfli71gwnu == 
\l\z\g\h\2\m\m\j\l\n\q\t\4\n\5\c\n\n\a\k\g\t\i\q\e\h\i\u\6\8\k\u\i\v\y\l\t\s\x\m\r\4\8\5\j\3\h\h\v\f\x\r\1\t\u\5\a\v\s\i\l\9\s\6\y\p\g\1\m\5\y\y\v\6\q\w\z\u\n\a\d\q\n\9\y\b\n\s\x\u\4\z\6\3\7\m\j\z\9\8\1\6\f\3\b\v\f\c\4\p\v\3\w\k\x\w\k\c\0\3\9\8\6\m\d\2\f\1\c\b\d\x\8\p\h\1\w\c\u\q\2\0\h\u\p\0\s\8\8\b\c\i\5\a\l\o\o\r\u\p\i\z\d\d\k\k\a\e\8\u\p\5\6\l\2\7\w\l\k\3\a\q\p\5\o\n\s\h\u\9\m\q\9\9\8\q\0\r\v\9\l\2\h\w\8\n\v\e\2\o\u\6\y\o\j\2\o\x\1\1\l\i\8\2\u\p\a\x\e\g\r\h\3\l\r\g\a\5\h\k\3\2\s\u\1\j\9\n\x\1\y\3\h\i\s\d\b\m\h\3\g\a\6\m\w\s\1\s\j\s\h\j\0\c\a\b\b\y\q\m\m\z\0\h\f\f\3\y\o\h\i\4\r\b\y\s\v\9\0\z\a\7\7\w\d\6\s\m\n\1\5\f\6\8\c\n\w\3\v\u\i\p\s\g\z\g\8\0\6\e\8\p\1\n\b\r\w\g\a\d\r\p\j\z\g\m\5\k\6\9\f\p\f\7\w\4\1\l\f\8\7\a\p\6\g\8\3\n\7\e\4\4\e\5\u\x\r\d\h\5\t\l\a\v\l\9\q\y\z\j\7\t\d\1\8\r\6\j\9\0\0\8\s\9\o\x\z\b\n\7\q\l\r\e\5\3\5\y\t\3\l\d\1\5\1\t\0\6\a\h\8\h\z\p\4\3\u\9\q\2\z\j\l\s\d\k\o\c\i\3\2\3\o\4\f\t\k\f\v\z\9\e\p\f\6\c\n\r\u\i\o\8\9\7\j\r\m\o\y\6\a\q\e\4\f\n\3\9\2\t\t\c\w\c\e\j\x\q\w\i\3\l\j\k\0\z\w\k\e\n\c\h\m\l\z\t\1\9\n\q\s\f\m\e\1\t\w\x\6\g\4\4\p\x\a\x\w\k\o\c\0\s\q\7\z\0\k\t\s\x\u\2\t\z\q\3\b\j\d\e\3\o\i\7\f\u\5\a\p\v\p\h\g\d\x\6\b\p\3\1\6\l\d\j\s\c\8\i\7\2\t\l\y\v\v\6\7\9\v\w\e\v\8\v\0\u\r\u\e\u\x\0\s\m\6\0\0\y\3\e\l\r\q\b\o\n\b\9\l\d\e\4\k\f\j\i\y\h\y\x\j\4\9\x\b\9\g\g\s\4\0\d\y\v\j\u\5\8\f\v\p\4\t\6\q\b\k\1\2\s\y\d\6\6\q\f\u\4\d\x\a\4\o\n\6\e\v\m\s\0\c\w\9\8\w\6\x\u\f\v\j\g\t\k\t\q\w\8\4\8\d\b\u\i\q\e\b\i\0\f\c\0\y\v\r\p\c\f\y\v\t\y\g\o\o\r\k\g\2\5\t\l\o\d\y\j\e\p\q\l\3\6\y\l\w\3\r\9\s\v\p\a\t\p\u\0\a\3\w\s\2\o\p\i\2\7\0\c\9\h\g\7\w\x\v\g\x\1\s\c\t\f\q\n\o\5\v\l\e\n\f\a\r\k\b\0\y\7\6\h\c\0\0\e\5\l\a\y\3\w\z\2\m\r\l\z\8\4\x\8\s\w\w\h\4\j\i\a\i\k\z\h\d\v\v\b\a\k\e\z\o\b\w\3\o\m\j\4\r\n\e\9\o\v\6\o\6\m\d\k\2\2\5\0\2\g\p\q\m\2\i\q\s\n\a\6\4\y\d\w\4\v\k\0\x\f\u\3\x\5\d\n\o\j\c\3\1\d\w\f\4\0\v\q\l\0\0\6\q\d\k\o\v\d\m\g\m\r\6\n\j\6\n\q\m\o\c\h\j\o\x\1\m\h\c\8\s\3\x\r\t\f\s\j\y\z\1\c\n\x\h\k\b\u\w\c\t\6\2\z\0\0\o\n\p\5\p\q\8\d\n\o\o\x\f\1\2\j\q\6\n\3\e\s\5\u\n\a\4\z\b\w\b\p\r\t\f\q\w\9\k\e\p\0\k\f\l\i\7\1\g\w\n\u ]] 00:08:04.802 10:31:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:04.803 10:31:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ lzgh2mmjlnqt4n5cnnakgtiqehiu68kuivyltsxmr485j3hhvfxr1tu5avsil9s6ypg1m5yyv6qwzunadqn9ybnsxu4z637mjz9816f3bvfc4pv3wkxwkc03986md2f1cbdx8ph1wcuq20hup0s88bci5aloorupizddkkae8up56l27wlk3aqp5onshu9mq998q0rv9l2hw8nve2ou6yoj2ox11li82upaxegrh3lrga5hk32su1j9nx1y3hisdbmh3ga6mws1sjshj0cabbyqmmz0hff3yohi4rbysv90za77wd6smn15f68cnw3vuipsgzg806e8p1nbrwgadrpjzgm5k69fpf7w41lf87ap6g83n7e44e5uxrdh5tlavl9qyzj7td18r6j9008s9oxzbn7qlre535yt3ld151t06ah8hzp43u9q2zjlsdkoci323o4ftkfvz9epf6cnruio897jrmoy6aqe4fn392ttcwcejxqwi3ljk0zwkenchmlzt19nqsfme1twx6g44pxaxwkoc0sq7z0ktsxu2tzq3bjde3oi7fu5apvphgdx6bp316ldjsc8i72tlyvv679vwev8v0urueux0sm600y3elrqbonb9lde4kfjiyhyxj49xb9ggs40dyvju58fvp4t6qbk12syd66qfu4dxa4on6evms0cw98w6xufvjgtktqw848dbuiqebi0fc0yvrpcfyvtygoorkg25tlodyjepql36ylw3r9svpatpu0a3ws2opi270c9hg7wxvgx1sctfqno5vlenfarkb0y76hc00e5lay3wz2mrlz84x8swwh4jiaikzhdvvbakezobw3omj4rne9ov6o6mdk22502gpqm2iqsna64ydw4vk0xfu3x5dnojc31dwf40vql006qdkovdmgmr6nj6nqmochjox1mhc8s3xrtfsjyz1cnxhkbuwct62z00onp5pq8dnooxf12jq6n3es5una4zbwbprtfqw9kep0kfli71gwnu == 
\l\z\g\h\2\m\m\j\l\n\q\t\4\n\5\c\n\n\a\k\g\t\i\q\e\h\i\u\6\8\k\u\i\v\y\l\t\s\x\m\r\4\8\5\j\3\h\h\v\f\x\r\1\t\u\5\a\v\s\i\l\9\s\6\y\p\g\1\m\5\y\y\v\6\q\w\z\u\n\a\d\q\n\9\y\b\n\s\x\u\4\z\6\3\7\m\j\z\9\8\1\6\f\3\b\v\f\c\4\p\v\3\w\k\x\w\k\c\0\3\9\8\6\m\d\2\f\1\c\b\d\x\8\p\h\1\w\c\u\q\2\0\h\u\p\0\s\8\8\b\c\i\5\a\l\o\o\r\u\p\i\z\d\d\k\k\a\e\8\u\p\5\6\l\2\7\w\l\k\3\a\q\p\5\o\n\s\h\u\9\m\q\9\9\8\q\0\r\v\9\l\2\h\w\8\n\v\e\2\o\u\6\y\o\j\2\o\x\1\1\l\i\8\2\u\p\a\x\e\g\r\h\3\l\r\g\a\5\h\k\3\2\s\u\1\j\9\n\x\1\y\3\h\i\s\d\b\m\h\3\g\a\6\m\w\s\1\s\j\s\h\j\0\c\a\b\b\y\q\m\m\z\0\h\f\f\3\y\o\h\i\4\r\b\y\s\v\9\0\z\a\7\7\w\d\6\s\m\n\1\5\f\6\8\c\n\w\3\v\u\i\p\s\g\z\g\8\0\6\e\8\p\1\n\b\r\w\g\a\d\r\p\j\z\g\m\5\k\6\9\f\p\f\7\w\4\1\l\f\8\7\a\p\6\g\8\3\n\7\e\4\4\e\5\u\x\r\d\h\5\t\l\a\v\l\9\q\y\z\j\7\t\d\1\8\r\6\j\9\0\0\8\s\9\o\x\z\b\n\7\q\l\r\e\5\3\5\y\t\3\l\d\1\5\1\t\0\6\a\h\8\h\z\p\4\3\u\9\q\2\z\j\l\s\d\k\o\c\i\3\2\3\o\4\f\t\k\f\v\z\9\e\p\f\6\c\n\r\u\i\o\8\9\7\j\r\m\o\y\6\a\q\e\4\f\n\3\9\2\t\t\c\w\c\e\j\x\q\w\i\3\l\j\k\0\z\w\k\e\n\c\h\m\l\z\t\1\9\n\q\s\f\m\e\1\t\w\x\6\g\4\4\p\x\a\x\w\k\o\c\0\s\q\7\z\0\k\t\s\x\u\2\t\z\q\3\b\j\d\e\3\o\i\7\f\u\5\a\p\v\p\h\g\d\x\6\b\p\3\1\6\l\d\j\s\c\8\i\7\2\t\l\y\v\v\6\7\9\v\w\e\v\8\v\0\u\r\u\e\u\x\0\s\m\6\0\0\y\3\e\l\r\q\b\o\n\b\9\l\d\e\4\k\f\j\i\y\h\y\x\j\4\9\x\b\9\g\g\s\4\0\d\y\v\j\u\5\8\f\v\p\4\t\6\q\b\k\1\2\s\y\d\6\6\q\f\u\4\d\x\a\4\o\n\6\e\v\m\s\0\c\w\9\8\w\6\x\u\f\v\j\g\t\k\t\q\w\8\4\8\d\b\u\i\q\e\b\i\0\f\c\0\y\v\r\p\c\f\y\v\t\y\g\o\o\r\k\g\2\5\t\l\o\d\y\j\e\p\q\l\3\6\y\l\w\3\r\9\s\v\p\a\t\p\u\0\a\3\w\s\2\o\p\i\2\7\0\c\9\h\g\7\w\x\v\g\x\1\s\c\t\f\q\n\o\5\v\l\e\n\f\a\r\k\b\0\y\7\6\h\c\0\0\e\5\l\a\y\3\w\z\2\m\r\l\z\8\4\x\8\s\w\w\h\4\j\i\a\i\k\z\h\d\v\v\b\a\k\e\z\o\b\w\3\o\m\j\4\r\n\e\9\o\v\6\o\6\m\d\k\2\2\5\0\2\g\p\q\m\2\i\q\s\n\a\6\4\y\d\w\4\v\k\0\x\f\u\3\x\5\d\n\o\j\c\3\1\d\w\f\4\0\v\q\l\0\0\6\q\d\k\o\v\d\m\g\m\r\6\n\j\6\n\q\m\o\c\h\j\o\x\1\m\h\c\8\s\3\x\r\t\f\s\j\y\z\1\c\n\x\h\k\b\u\w\c\t\6\2\z\0\0\o\n\p\5\p\q\8\d\n\o\o\x\f\1\2\j\q\6\n\3\e\s\5\u\n\a\4\z\b\w\b\p\r\t\f\q\w\9\k\e\p\0\k\f\l\i\7\1\g\w\n\u ]] 00:08:04.803 10:31:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:05.061 10:31:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:05.061 10:31:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:05.061 10:31:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:05.061 10:31:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:05.061 [2024-11-15 10:31:30.513795] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
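The round trip is verified two ways above: the first 1024 bytes read back from the uring device are pattern-matched against the original magic (the long backslash-escaped strings are bash xtrace of those [[ ... ]] tests), and the dump files are diffed whole. A sketch, assuming $magic still holds the generated string:

  read -rn1024 verify_magic < magic.dump1
  [[ $verify_magic == "$magic" ]] || exit 1
  diff -q magic.dump0 magic.dump1      # byte-for-byte comparison

The copy that starts here (--ib=uring0 --ob=malloc0) exercises the read path once more, this time into the malloc bdev.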
00:08:05.061 [2024-11-15 10:31:30.513952] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61347 ] 00:08:05.061 { 00:08:05.061 "subsystems": [ 00:08:05.061 { 00:08:05.061 "subsystem": "bdev", 00:08:05.061 "config": [ 00:08:05.061 { 00:08:05.061 "params": { 00:08:05.061 "block_size": 512, 00:08:05.061 "num_blocks": 1048576, 00:08:05.061 "name": "malloc0" 00:08:05.061 }, 00:08:05.061 "method": "bdev_malloc_create" 00:08:05.061 }, 00:08:05.061 { 00:08:05.061 "params": { 00:08:05.061 "filename": "/dev/zram1", 00:08:05.061 "name": "uring0" 00:08:05.061 }, 00:08:05.061 "method": "bdev_uring_create" 00:08:05.061 }, 00:08:05.061 { 00:08:05.061 "method": "bdev_wait_for_examine" 00:08:05.061 } 00:08:05.061 ] 00:08:05.061 } 00:08:05.061 ] 00:08:05.061 } 00:08:05.320 [2024-11-15 10:31:30.663455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.320 [2024-11-15 10:31:30.748100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.320 [2024-11-15 10:31:30.805232] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.693  [2024-11-15T10:31:33.142Z] Copying: 150/512 [MB] (150 MBps) [2024-11-15T10:31:34.078Z] Copying: 300/512 [MB] (149 MBps) [2024-11-15T10:31:34.644Z] Copying: 450/512 [MB] (149 MBps) [2024-11-15T10:31:34.911Z] Copying: 512/512 [MB] (average 150 MBps) 00:08:09.413 00:08:09.413 10:31:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:09.413 10:31:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:09.413 10:31:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:09.413 10:31:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:09.413 10:31:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:09.413 10:31:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:09.413 10:31:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:09.413 10:31:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:09.413 [2024-11-15 10:31:34.879700] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
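This stage sets up the bdev_uring_delete negative test: the generated config creates uring0 and then deletes it again, so by the time I/O would start there is nothing to copy (hence the Copying: 0/0 below). The extra config entry, reflowed:

  {"params":{"name":"uring0"},"method":"bdev_uring_delete"}

The follow-up invocation then tries to read the deleted bdev under the harness's NOT wrapper, which inverts the exit status, so the test passes only because spdk_dd fails with 'No such device'; the es=237 -> 109 -> 1 bookkeeping further down is that wrapper stripping the signal bit (237 - 128 = 109) and normalizing to a plain failure before asserting it was non-zero.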
00:08:09.413 [2024-11-15 10:31:34.879814] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61411 ] 00:08:09.413 { 00:08:09.413 "subsystems": [ 00:08:09.413 { 00:08:09.413 "subsystem": "bdev", 00:08:09.413 "config": [ 00:08:09.413 { 00:08:09.413 "params": { 00:08:09.414 "block_size": 512, 00:08:09.414 "num_blocks": 1048576, 00:08:09.414 "name": "malloc0" 00:08:09.414 }, 00:08:09.414 "method": "bdev_malloc_create" 00:08:09.414 }, 00:08:09.414 { 00:08:09.414 "params": { 00:08:09.414 "filename": "/dev/zram1", 00:08:09.414 "name": "uring0" 00:08:09.414 }, 00:08:09.414 "method": "bdev_uring_create" 00:08:09.414 }, 00:08:09.414 { 00:08:09.414 "params": { 00:08:09.414 "name": "uring0" 00:08:09.414 }, 00:08:09.414 "method": "bdev_uring_delete" 00:08:09.414 }, 00:08:09.414 { 00:08:09.414 "method": "bdev_wait_for_examine" 00:08:09.414 } 00:08:09.414 ] 00:08:09.414 } 00:08:09.414 ] 00:08:09.414 } 00:08:09.677 [2024-11-15 10:31:35.025363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.677 [2024-11-15 10:31:35.111076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.677 [2024-11-15 10:31:35.170411] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.937  [2024-11-15T10:31:36.002Z] Copying: 0/0 [B] (average 0 Bps) 00:08:10.504 00:08:10.504 10:31:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:10.504 10:31:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:10.504 10:31:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:08:10.504 10:31:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:10.504 10:31:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:10.504 10:31:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.504 10:31:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:10.504 10:31:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:10.504 10:31:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.504 10:31:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.504 10:31:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.504 10:31:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.504 10:31:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.504 10:31:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.504 10:31:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:10.504 10:31:35 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:10.504 [2024-11-15 10:31:35.843334] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:08:10.504 [2024-11-15 10:31:35.843471] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61440 ] 00:08:10.504 { 00:08:10.504 "subsystems": [ 00:08:10.504 { 00:08:10.504 "subsystem": "bdev", 00:08:10.504 "config": [ 00:08:10.504 { 00:08:10.504 "params": { 00:08:10.504 "block_size": 512, 00:08:10.504 "num_blocks": 1048576, 00:08:10.504 "name": "malloc0" 00:08:10.504 }, 00:08:10.504 "method": "bdev_malloc_create" 00:08:10.504 }, 00:08:10.504 { 00:08:10.504 "params": { 00:08:10.504 "filename": "/dev/zram1", 00:08:10.504 "name": "uring0" 00:08:10.504 }, 00:08:10.504 "method": "bdev_uring_create" 00:08:10.504 }, 00:08:10.504 { 00:08:10.504 "params": { 00:08:10.504 "name": "uring0" 00:08:10.504 }, 00:08:10.504 "method": "bdev_uring_delete" 00:08:10.504 }, 00:08:10.504 { 00:08:10.504 "method": "bdev_wait_for_examine" 00:08:10.504 } 00:08:10.504 ] 00:08:10.504 } 00:08:10.504 ] 00:08:10.504 } 00:08:10.504 [2024-11-15 10:31:35.993540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.762 [2024-11-15 10:31:36.058080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.763 [2024-11-15 10:31:36.112130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.021 [2024-11-15 10:31:36.321358] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:11.021 [2024-11-15 10:31:36.321424] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:11.021 [2024-11-15 10:31:36.321437] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:11.021 [2024-11-15 10:31:36.321449] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:11.280 [2024-11-15 10:31:36.635592] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:11.280 10:31:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:08:11.280 10:31:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:11.280 10:31:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:08:11.280 10:31:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:08:11.280 10:31:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:08:11.280 10:31:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:11.280 10:31:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:11.280 10:31:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:08:11.280 10:31:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:08:11.280 10:31:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:08:11.280 10:31:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:08:11.280 10:31:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:11.538 00:08:11.538 real 0m15.652s 00:08:11.538 user 0m10.644s 00:08:11.538 sys 0m13.158s 00:08:11.538 10:31:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:11.538 ************************************ 00:08:11.538 END TEST dd_uring_copy 00:08:11.538 ************************************ 00:08:11.538 10:31:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:11.538 ************************************ 00:08:11.538 END TEST spdk_dd_uring 00:08:11.538 ************************************ 00:08:11.538 00:08:11.538 real 0m15.916s 00:08:11.538 user 0m10.802s 00:08:11.538 sys 0m13.266s 00:08:11.538 10:31:37 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:11.538 10:31:37 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:11.796 10:31:37 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:11.796 10:31:37 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:11.796 10:31:37 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:11.796 10:31:37 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:11.796 ************************************ 00:08:11.796 START TEST spdk_dd_sparse 00:08:11.796 ************************************ 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:11.796 * Looking for test storage... 00:08:11.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lcov --version 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.796 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:11.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.797 --rc genhtml_branch_coverage=1 00:08:11.797 --rc genhtml_function_coverage=1 00:08:11.797 --rc genhtml_legend=1 00:08:11.797 --rc geninfo_all_blocks=1 00:08:11.797 --rc geninfo_unexecuted_blocks=1 00:08:11.797 00:08:11.797 ' 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:11.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.797 --rc genhtml_branch_coverage=1 00:08:11.797 --rc genhtml_function_coverage=1 00:08:11.797 --rc genhtml_legend=1 00:08:11.797 --rc geninfo_all_blocks=1 00:08:11.797 --rc geninfo_unexecuted_blocks=1 00:08:11.797 00:08:11.797 ' 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:11.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.797 --rc genhtml_branch_coverage=1 00:08:11.797 --rc genhtml_function_coverage=1 00:08:11.797 --rc genhtml_legend=1 00:08:11.797 --rc geninfo_all_blocks=1 00:08:11.797 --rc geninfo_unexecuted_blocks=1 00:08:11.797 00:08:11.797 ' 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:11.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.797 --rc genhtml_branch_coverage=1 00:08:11.797 --rc genhtml_function_coverage=1 00:08:11.797 --rc genhtml_legend=1 00:08:11.797 --rc geninfo_all_blocks=1 00:08:11.797 --rc geninfo_unexecuted_blocks=1 00:08:11.797 00:08:11.797 ' 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.797 10:31:37 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:11.797 1+0 records in 00:08:11.797 1+0 records out 00:08:11.797 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00625742 s, 670 MB/s 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:11.797 1+0 records in 00:08:11.797 1+0 records out 00:08:11.797 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00523703 s, 801 MB/s 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:11.797 1+0 records in 00:08:11.797 1+0 records out 00:08:11.797 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00633087 s, 663 MB/s 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:11.797 10:31:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:12.055 ************************************ 00:08:12.055 START TEST dd_sparse_file_to_file 00:08:12.055 ************************************ 00:08:12.055 10:31:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1127 -- # file_to_file 00:08:12.055 10:31:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:12.055 10:31:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:12.055 10:31:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:12.055 10:31:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:12.055 10:31:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:12.055 10:31:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:12.055 10:31:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:12.055 10:31:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:12.055 10:31:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:12.055 10:31:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:12.055 { 00:08:12.055 "subsystems": [ 00:08:12.055 { 00:08:12.055 "subsystem": "bdev", 00:08:12.055 "config": [ 00:08:12.055 { 00:08:12.055 "params": { 00:08:12.055 "block_size": 4096, 00:08:12.055 "filename": "dd_sparse_aio_disk", 00:08:12.055 "name": "dd_aio" 00:08:12.055 }, 00:08:12.055 "method": "bdev_aio_create" 00:08:12.055 }, 00:08:12.055 { 00:08:12.055 "params": { 00:08:12.055 "lvs_name": "dd_lvstore", 00:08:12.055 "bdev_name": "dd_aio" 00:08:12.055 }, 00:08:12.055 "method": "bdev_lvol_create_lvstore" 00:08:12.055 }, 00:08:12.055 { 00:08:12.055 "method": "bdev_wait_for_examine" 00:08:12.055 } 00:08:12.055 ] 00:08:12.055 } 00:08:12.055 ] 00:08:12.055 } 00:08:12.055 [2024-11-15 10:31:37.350721] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
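Condensed, the prepare step and the invocation traced above amount to the following minimal sketch (SPDK_DD and the file_to_file.json name are illustrative shorthand; the harness itself feeds the exact config printed above through /dev/fd/62 rather than a named file):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

# 100 MiB backing file for the AIO bdev, plus a 36 MiB sparse input file:
# three 4 MiB data extents at offsets 0, 16 and 32 MiB, holes in between.
truncate dd_sparse_aio_disk --size 104857600
dd if=/dev/zero of=file_zero1 bs=4M count=1
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8

cat > file_to_file.json <<'JSON'
{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_aio_create","params":{"filename":"dd_sparse_aio_disk","name":"dd_aio","block_size":4096}},
  {"method":"bdev_lvol_create_lvstore","params":{"bdev_name":"dd_aio","lvs_name":"dd_lvstore"}},
  {"method":"bdev_wait_for_examine"}]}]}
JSON

# File-to-file copy in 12 MiB units with hole skipping enabled.
"$SPDK_DD" --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json file_to_file.json

The stat checks that follow are the pass criterion: %s (logical size) must read 37748736 bytes = 36 MiB on both files, and %b (allocated 512-byte blocks) must read 24576 on both, i.e. 24576 * 512 = 12582912 bytes, exactly the 12 MiB of data extents, showing the holes survived the copy.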
00:08:12.055 [2024-11-15 10:31:37.350902] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61534 ] 00:08:12.055 [2024-11-15 10:31:37.504972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.314 [2024-11-15 10:31:37.575327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.314 [2024-11-15 10:31:37.633016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.314  [2024-11-15T10:31:38.070Z] Copying: 12/36 [MB] (average 923 MBps) 00:08:12.572 00:08:12.572 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:12.572 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:12.572 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:12.572 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:12.572 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:12.572 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:12.572 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:12.572 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:12.572 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:12.572 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:12.572 00:08:12.572 real 0m0.738s 00:08:12.572 user 0m0.462s 00:08:12.572 sys 0m0.361s 00:08:12.572 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:12.572 ************************************ 00:08:12.572 END TEST dd_sparse_file_to_file 00:08:12.572 ************************************ 00:08:12.572 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:12.572 10:31:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:12.572 10:31:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:12.572 10:31:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:12.572 10:31:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:12.830 ************************************ 00:08:12.830 START TEST dd_sparse_file_to_bdev 00:08:12.830 ************************************ 00:08:12.830 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1127 -- # file_to_bdev 00:08:12.830 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:12.830 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:12.830 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' 
['thin_provision']='true') 00:08:12.830 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:12.830 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:12.830 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:12.830 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:12.830 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:12.830 { 00:08:12.830 "subsystems": [ 00:08:12.830 { 00:08:12.830 "subsystem": "bdev", 00:08:12.830 "config": [ 00:08:12.830 { 00:08:12.830 "params": { 00:08:12.830 "block_size": 4096, 00:08:12.830 "filename": "dd_sparse_aio_disk", 00:08:12.830 "name": "dd_aio" 00:08:12.830 }, 00:08:12.830 "method": "bdev_aio_create" 00:08:12.830 }, 00:08:12.830 { 00:08:12.830 "params": { 00:08:12.830 "lvs_name": "dd_lvstore", 00:08:12.830 "lvol_name": "dd_lvol", 00:08:12.830 "size_in_mib": 36, 00:08:12.830 "thin_provision": true 00:08:12.830 }, 00:08:12.830 "method": "bdev_lvol_create" 00:08:12.830 }, 00:08:12.830 { 00:08:12.830 "method": "bdev_wait_for_examine" 00:08:12.830 } 00:08:12.830 ] 00:08:12.830 } 00:08:12.830 ] 00:08:12.830 } 00:08:12.830 [2024-11-15 10:31:38.157700] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:08:12.830 [2024-11-15 10:31:38.157865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61582 ] 00:08:12.830 [2024-11-15 10:31:38.314271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.088 [2024-11-15 10:31:38.382922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.088 [2024-11-15 10:31:38.437266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.088  [2024-11-15T10:31:38.844Z] Copying: 12/36 [MB] (average 571 MBps) 00:08:13.346 00:08:13.346 00:08:13.346 real 0m0.681s 00:08:13.346 user 0m0.436s 00:08:13.346 sys 0m0.354s 00:08:13.346 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:13.346 ************************************ 00:08:13.346 END TEST dd_sparse_file_to_bdev 00:08:13.346 ************************************ 00:08:13.346 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:13.346 10:31:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:13.346 10:31:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:13.346 10:31:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:13.346 10:31:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:13.346 ************************************ 00:08:13.346 START TEST dd_sparse_bdev_to_file 00:08:13.346 ************************************ 00:08:13.346 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1127 -- # bdev_to_file 00:08:13.346 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 
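The file_to_bdev stage just traced differs only in its output target and one extra config entry: a 36 MiB thin-provisioned logical volume on the lvstore, matching the JSON printed above. A sketch under the same assumptions (SPDK_DD as in the earlier sketch; file_to_bdev.json is again an illustrative name):

cat > file_to_bdev.json <<'JSON'
{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_aio_create","params":{"filename":"dd_sparse_aio_disk","name":"dd_aio","block_size":4096}},
  {"method":"bdev_lvol_create","params":{"lvs_name":"dd_lvstore","lvol_name":"dd_lvol","size_in_mib":36,"thin_provision":true}},
  {"method":"bdev_wait_for_examine"}]}]}
JSON

# Sparse file in, logical volume out; --ob selects a bdev instead of a file.
"$SPDK_DD" --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json file_to_bdev.json

Thin provisioning means only the clusters spdk_dd actually writes get allocated, which should let the bdev_to_file stage now starting read the unwritten ranges back as zeroes and recreate the holes in file_zero3.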
00:08:13.346 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:13.346 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:13.346 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:13.346 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:13.346 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:13.346 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:13.346 10:31:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:13.605 { 00:08:13.605 "subsystems": [ 00:08:13.605 { 00:08:13.605 "subsystem": "bdev", 00:08:13.605 "config": [ 00:08:13.605 { 00:08:13.605 "params": { 00:08:13.605 "block_size": 4096, 00:08:13.605 "filename": "dd_sparse_aio_disk", 00:08:13.605 "name": "dd_aio" 00:08:13.605 }, 00:08:13.605 "method": "bdev_aio_create" 00:08:13.605 }, 00:08:13.605 { 00:08:13.605 "method": "bdev_wait_for_examine" 00:08:13.605 } 00:08:13.605 ] 00:08:13.605 } 00:08:13.605 ] 00:08:13.605 } 00:08:13.605 [2024-11-15 10:31:38.864764] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:08:13.605 [2024-11-15 10:31:38.864903] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61619 ] 00:08:13.605 [2024-11-15 10:31:39.014340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.605 [2024-11-15 10:31:39.097327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.862 [2024-11-15 10:31:39.151655] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.862  [2024-11-15T10:31:39.618Z] Copying: 12/36 [MB] (average 750 MBps) 00:08:14.120 00:08:14.120 10:31:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:14.120 10:31:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:14.120 10:31:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:14.120 10:31:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:14.120 10:31:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:14.120 10:31:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:14.120 10:31:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:14.120 10:31:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:14.120 ************************************ 00:08:14.120 END TEST dd_sparse_bdev_to_file 00:08:14.120 ************************************ 00:08:14.120 10:31:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:14.120 10:31:39 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:14.120 00:08:14.120 real 0m0.689s 00:08:14.120 user 0m0.446s 00:08:14.120 sys 0m0.372s 00:08:14.120 10:31:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:14.120 10:31:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:14.120 10:31:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:14.120 10:31:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:14.120 10:31:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:14.120 10:31:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:14.120 10:31:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:14.120 ************************************ 00:08:14.120 END TEST spdk_dd_sparse 00:08:14.120 ************************************ 00:08:14.120 00:08:14.120 real 0m2.483s 00:08:14.120 user 0m1.522s 00:08:14.120 sys 0m1.276s 00:08:14.120 10:31:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:14.120 10:31:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:14.120 10:31:39 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:14.120 10:31:39 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:14.120 10:31:39 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:14.120 10:31:39 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:14.120 ************************************ 00:08:14.120 START TEST spdk_dd_negative 00:08:14.120 ************************************ 00:08:14.120 10:31:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:14.379 * Looking for test storage... 
00:08:14.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:14.379 10:31:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:14.379 10:31:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lcov --version 00:08:14.379 10:31:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:14.379 10:31:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:14.379 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.379 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.379 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.379 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.379 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.379 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.379 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.379 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.379 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.379 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.379 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.379 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:08:14.379 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:08:14.379 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.379 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.379 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:08:14.379 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:08:14.379 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:14.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.380 --rc genhtml_branch_coverage=1 00:08:14.380 --rc genhtml_function_coverage=1 00:08:14.380 --rc genhtml_legend=1 00:08:14.380 --rc geninfo_all_blocks=1 00:08:14.380 --rc geninfo_unexecuted_blocks=1 00:08:14.380 00:08:14.380 ' 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:14.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.380 --rc genhtml_branch_coverage=1 00:08:14.380 --rc genhtml_function_coverage=1 00:08:14.380 --rc genhtml_legend=1 00:08:14.380 --rc geninfo_all_blocks=1 00:08:14.380 --rc geninfo_unexecuted_blocks=1 00:08:14.380 00:08:14.380 ' 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:14.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.380 --rc genhtml_branch_coverage=1 00:08:14.380 --rc genhtml_function_coverage=1 00:08:14.380 --rc genhtml_legend=1 00:08:14.380 --rc geninfo_all_blocks=1 00:08:14.380 --rc geninfo_unexecuted_blocks=1 00:08:14.380 00:08:14.380 ' 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:14.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.380 --rc genhtml_branch_coverage=1 00:08:14.380 --rc genhtml_function_coverage=1 00:08:14.380 --rc genhtml_legend=1 00:08:14.380 --rc geninfo_all_blocks=1 00:08:14.380 --rc geninfo_unexecuted_blocks=1 00:08:14.380 00:08:14.380 ' 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:14.380 ************************************ 00:08:14.380 START TEST 
dd_invalid_arguments 00:08:14.380 ************************************ 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1127 -- # invalid_arguments 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:14.380 10:31:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:14.380 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:14.380 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:14.380 00:08:14.380 CPU options: 00:08:14.380 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:14.380 (like [0,1,10]) 00:08:14.380 --lcores lcore to CPU mapping list. The list is in the format: 00:08:14.380 [<,lcores[@CPUs]>...] 00:08:14.380 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:14.380 Within the group, '-' is used for range separator, 00:08:14.380 ',' is used for single number separator. 00:08:14.380 '( )' can be omitted for single element group, 00:08:14.380 '@' can be omitted if cpus and lcores have the same value 00:08:14.380 --disable-cpumask-locks Disable CPU core lock files. 00:08:14.380 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:14.380 pollers in the app support interrupt mode) 00:08:14.380 -p, --main-core main (primary) core for DPDK 00:08:14.380 00:08:14.380 Configuration options: 00:08:14.380 -c, --config, --json JSON config file 00:08:14.380 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:14.380 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:14.380 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:14.380 --rpcs-allowed comma-separated list of permitted RPCS 00:08:14.380 --json-ignore-init-errors don't exit on invalid config entry 00:08:14.380 00:08:14.380 Memory options: 00:08:14.380 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:14.380 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:14.380 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:14.380 -R, --huge-unlink unlink huge files after initialization 00:08:14.380 -n, --mem-channels number of memory channels used for DPDK 00:08:14.380 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:14.380 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:14.380 --no-huge run without using hugepages 00:08:14.380 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:08:14.380 -i, --shm-id shared memory ID (optional) 00:08:14.380 -g, --single-file-segments force creating just one hugetlbfs file 00:08:14.380 00:08:14.380 PCI options: 00:08:14.380 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:14.380 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:14.380 -u, --no-pci disable PCI access 00:08:14.380 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:14.380 00:08:14.380 Log options: 00:08:14.381 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:14.381 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:14.381 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:14.381 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:14.381 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:08:14.381 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:08:14.381 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:08:14.381 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:08:14.381 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:08:14.381 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:08:14.381 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:14.381 --silence-noticelog disable notice level logging to stderr 00:08:14.381 00:08:14.381 Trace options: 00:08:14.381 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:14.381 setting 0 to disable trace (default 32768) 00:08:14.381 Tracepoints vary in size and can use more than one trace entry. 00:08:14.381 -e, --tpoint-group [:] 00:08:14.381 [2024-11-15 10:31:39.825075] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:14.381 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:14.381 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:08:14.381 bdev_raid, scheduler, all). 00:08:14.381 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:14.381 a tracepoint group. First tpoint inside a group can be enabled by 00:08:14.381 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:14.381 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:14.381 in /include/spdk_internal/trace_defs.h 00:08:14.381 00:08:14.381 Other options: 00:08:14.381 -h, --help show this usage 00:08:14.381 -v, --version print SPDK version 00:08:14.381 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:14.381 --env-context Opaque context for use of the env implementation 00:08:14.381 00:08:14.381 Application specific: 00:08:14.381 [--------- DD Options ---------] 00:08:14.381 --if Input file. Must specify either --if or --ib. 00:08:14.381 --ib Input bdev. Must specifier either --if or --ib 00:08:14.381 --of Output file. Must specify either --of or --ob. 00:08:14.381 --ob Output bdev. Must specify either --of or --ob. 00:08:14.381 --iflag Input file flags. 00:08:14.381 --oflag Output file flags. 00:08:14.381 --bs I/O unit size (default: 4096) 00:08:14.381 --qd Queue depth (default: 2) 00:08:14.381 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:14.381 --skip Skip this many I/O units at start of input. (default: 0) 00:08:14.381 --seek Skip this many I/O units at start of output. (default: 0) 00:08:14.381 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:14.381 --sparse Enable hole skipping in input target 00:08:14.381 Available iflag and oflag values: 00:08:14.381 append - append mode 00:08:14.381 direct - use direct I/O for data 00:08:14.381 directory - fail unless a directory 00:08:14.381 dsync - use synchronized I/O for data 00:08:14.381 noatime - do not update access time 00:08:14.381 noctty - do not assign controlling terminal from file 00:08:14.381 nofollow - do not follow symlinks 00:08:14.381 nonblock - use non-blocking I/O 00:08:14.381 sync - use synchronized I/O for data and metadata 00:08:14.381 10:31:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:08:14.381 ************************************ 00:08:14.381 END TEST dd_invalid_arguments 00:08:14.381 ************************************ 00:08:14.381 10:31:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:14.381 10:31:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:14.381 10:31:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:14.381 00:08:14.381 real 0m0.095s 00:08:14.381 user 0m0.058s 00:08:14.381 sys 0m0.034s 00:08:14.381 10:31:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:14.381 10:31:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:14.640 ************************************ 00:08:14.640 START TEST dd_double_input 00:08:14.640 ************************************ 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1127 -- # double_input 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:14.640 [2024-11-15 10:31:39.948589] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
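Every one of these negative cases leans on the harness's NOT wrapper visible in the traces: run the command and demand that it fail. A simplified model of that helper, not the exact autotest_common.sh implementation (the real one also routes the status through a case table, as the es=244 -> es=116 -> es=1 sequence in dd_smaller_blocksize further below shows):

NOT() {
    # Run the wrapped command; the negative test passes only if it fails.
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$(( es - 128 ))   # fold high exit codes down first
    (( es != 0 ))
}

# The double-input case above, expressed with the helper:
NOT "$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=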
00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:14.640 ************************************ 00:08:14.640 END TEST dd_double_input 00:08:14.640 ************************************ 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:14.640 00:08:14.640 real 0m0.069s 00:08:14.640 user 0m0.045s 00:08:14.640 sys 0m0.023s 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:14.640 10:31:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:14.640 ************************************ 00:08:14.640 START TEST dd_double_output 00:08:14.640 ************************************ 00:08:14.640 10:31:40 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1127 -- # double_output 00:08:14.640 10:31:40 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:14.640 10:31:40 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:08:14.640 10:31:40 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:14.640 10:31:40 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.640 10:31:40 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.640 10:31:40 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.640 10:31:40 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.640 10:31:40 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.640 10:31:40 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.640 10:31:40 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.640 10:31:40 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:14.640 10:31:40 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:14.640 [2024-11-15 10:31:40.075488] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:14.640 10:31:40 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:08:14.640 10:31:40 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:14.640 10:31:40 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:14.640 10:31:40 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:14.640 00:08:14.640 real 0m0.094s 00:08:14.640 user 0m0.060s 00:08:14.640 sys 0m0.033s 00:08:14.640 10:31:40 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:14.640 ************************************ 00:08:14.640 END TEST dd_double_output 00:08:14.640 ************************************ 00:08:14.640 10:31:40 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:14.899 ************************************ 00:08:14.899 START TEST dd_no_input 00:08:14.899 ************************************ 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1127 -- # no_input 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:14.899 [2024-11-15 10:31:40.207613] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:14.899 00:08:14.899 real 0m0.089s 00:08:14.899 user 0m0.053s 00:08:14.899 sys 0m0.034s 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:14.899 ************************************ 00:08:14.899 END TEST dd_no_input 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:14.899 ************************************ 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:14.899 ************************************ 00:08:14.899 START TEST dd_no_output 00:08:14.899 ************************************ 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1127 -- # no_output 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:14.899 [2024-11-15 10:31:40.339343] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:14.899 10:31:40 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:14.899 00:08:14.899 real 0m0.078s 00:08:14.899 user 0m0.052s 00:08:14.899 sys 0m0.024s 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:14.899 10:31:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:14.899 ************************************ 00:08:14.899 END TEST dd_no_output 00:08:14.899 ************************************ 00:08:15.157 10:31:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:15.157 10:31:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:15.157 10:31:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:15.157 10:31:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:15.157 ************************************ 00:08:15.157 START TEST dd_wrong_blocksize 00:08:15.157 ************************************ 00:08:15.157 10:31:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1127 -- # wrong_blocksize 00:08:15.157 10:31:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:15.157 10:31:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:08:15.157 10:31:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:15.157 10:31:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.157 10:31:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.157 10:31:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.157 10:31:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.157 10:31:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.157 10:31:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.157 10:31:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:15.158 [2024-11-15 10:31:40.460131] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:15.158 00:08:15.158 real 0m0.070s 00:08:15.158 user 0m0.040s 00:08:15.158 sys 0m0.029s 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:15.158 ************************************ 00:08:15.158 END TEST dd_wrong_blocksize 00:08:15.158 ************************************ 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:15.158 ************************************ 00:08:15.158 START TEST dd_smaller_blocksize 00:08:15.158 ************************************ 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1127 -- # smaller_blocksize 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.158 
10:31:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:15.158 10:31:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:15.158 [2024-11-15 10:31:40.598098] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:08:15.158 [2024-11-15 10:31:40.598237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61841 ] 00:08:15.416 [2024-11-15 10:31:40.744259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.416 [2024-11-15 10:31:40.829314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.416 [2024-11-15 10:31:40.908038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.981 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:16.239 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:16.239 [2024-11-15 10:31:41.653482] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:16.239 [2024-11-15 10:31:41.653600] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:16.497 [2024-11-15 10:31:41.844606] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:16.497 10:31:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:08:16.497 10:31:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:16.497 10:31:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:08:16.497 10:31:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:08:16.497 10:31:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:08:16.497 10:31:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:16.497 00:08:16.497 real 0m1.402s 00:08:16.497 user 0m0.535s 00:08:16.497 sys 0m0.756s 00:08:16.497 10:31:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:16.497 ************************************ 00:08:16.498 END TEST dd_smaller_blocksize 00:08:16.498 ************************************ 00:08:16.498 10:31:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:16.498 10:31:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:08:16.498 10:31:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:16.498 10:31:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:16.498 10:31:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:16.498 ************************************ 00:08:16.498 START TEST dd_invalid_count 00:08:16.498 ************************************ 00:08:16.498 10:31:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1127 -- # invalid_count 
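dd_smaller_blocksize, traced above, is the one case that fails at runtime rather than at argument parsing: --bs=99999999999999 makes spdk_dd request an absurdly large hugepage-backed buffer, both EAL memseg lookups fail, and the app exits after advising a smaller block size value. The resulting status 244 is consistent with a negative errno (-12, ENOMEM) truncated to an 8-bit exit code, and the harness folds it before the NOT check; a sketch of that folding as the es=244 -> es=116 -> es=1 trace suggests (the exact case table lives in autotest_common.sh and is assumed here):

es=244                                 # plausibly -ENOMEM (-12) cast to 8 bits: 256 - 12
(( es > 128 )) && es=$(( es - 128 ))   # 244 -> 116
case "$es" in
    116) es=1 ;;                       # assumed mapping; the trace shows es becoming 1 here
esac
(( es != 0 ))                          # still a failure, so the NOT wrapper passes the test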
00:08:16.498 10:31:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:16.498 10:31:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:08:16.498 10:31:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:16.498 10:31:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.498 10:31:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.498 10:31:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.498 10:31:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.498 10:31:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.498 10:31:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.498 10:31:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.498 10:31:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:16.498 10:31:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:16.757 [2024-11-15 10:31:42.046189] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:16.757 00:08:16.757 real 0m0.094s 00:08:16.757 user 0m0.059s 00:08:16.757 sys 0m0.034s 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:16.757 ************************************ 00:08:16.757 END TEST dd_invalid_count 00:08:16.757 ************************************ 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:16.757 ************************************ 
00:08:16.757 START TEST dd_invalid_oflag 00:08:16.757 ************************************ 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1127 -- # invalid_oflag 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:16.757 [2024-11-15 10:31:42.176255] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:16.757 00:08:16.757 real 0m0.071s 00:08:16.757 user 0m0.043s 00:08:16.757 sys 0m0.027s 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:16.757 ************************************ 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:16.757 END TEST dd_invalid_oflag 00:08:16.757 ************************************ 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:16.757 ************************************ 00:08:16.757 START TEST dd_invalid_iflag 00:08:16.757 
************************************ 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1127 -- # invalid_iflag 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:16.757 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:17.016 [2024-11-15 10:31:42.313065] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:17.016 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:08:17.016 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:17.016 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:17.016 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:17.016 00:08:17.016 real 0m0.099s 00:08:17.016 user 0m0.061s 00:08:17.016 sys 0m0.036s 00:08:17.016 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:17.016 10:31:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:17.016 ************************************ 00:08:17.016 END TEST dd_invalid_iflag 00:08:17.016 ************************************ 00:08:17.016 10:31:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:08:17.016 10:31:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:17.016 10:31:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:17.016 10:31:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:17.016 ************************************ 00:08:17.016 START TEST dd_unknown_flag 00:08:17.016 ************************************ 00:08:17.016 
10:31:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1127 -- # unknown_flag 00:08:17.016 10:31:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:17.016 10:31:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:08:17.016 10:31:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:17.016 10:31:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.016 10:31:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.016 10:31:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.016 10:31:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.016 10:31:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.016 10:31:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.016 10:31:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.016 10:31:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:17.016 10:31:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:17.016 [2024-11-15 10:31:42.442094] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:08:17.016 [2024-11-15 10:31:42.442187] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61944 ] 00:08:17.274 [2024-11-15 10:31:42.592791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.274 [2024-11-15 10:31:42.674210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.274 [2024-11-15 10:31:42.759359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.532 [2024-11-15 10:31:42.820546] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:17.532 [2024-11-15 10:31:42.820624] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:17.532 [2024-11-15 10:31:42.820695] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:17.532 [2024-11-15 10:31:42.820710] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:17.532 [2024-11-15 10:31:42.821018] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:17.532 [2024-11-15 10:31:42.821046] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:17.532 [2024-11-15 10:31:42.821113] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:17.532 [2024-11-15 10:31:42.821124] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:17.532 [2024-11-15 10:31:43.017684] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:17.790 00:08:17.790 real 0m0.718s 00:08:17.790 user 0m0.421s 00:08:17.790 sys 0m0.198s 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:17.790 ************************************ 00:08:17.790 END TEST dd_unknown_flag 00:08:17.790 ************************************ 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:17.790 ************************************ 00:08:17.790 START TEST dd_invalid_json 00:08:17.790 ************************************ 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1127 -- # invalid_json 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:17.790 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:17.790 [2024-11-15 10:31:43.231553] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:08:17.790 [2024-11-15 10:31:43.231696] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61978 ] 00:08:18.048 [2024-11-15 10:31:43.385141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.048 [2024-11-15 10:31:43.465535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.048 [2024-11-15 10:31:43.465635] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:18.048 [2024-11-15 10:31:43.465654] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:18.048 [2024-11-15 10:31:43.465665] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:18.048 [2024-11-15 10:31:43.465707] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.307 00:08:18.307 real 0m0.397s 00:08:18.307 user 0m0.207s 00:08:18.307 sys 0m0.088s 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:18.307 ************************************ 00:08:18.307 END TEST dd_invalid_json 00:08:18.307 ************************************ 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:18.307 ************************************ 00:08:18.307 START TEST dd_invalid_seek 00:08:18.307 ************************************ 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1127 -- # invalid_seek 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:18.307 
10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:18.307 10:31:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:18.307 [2024-11-15 10:31:43.659240] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:08:18.307 [2024-11-15 10:31:43.659345] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62002 ] 00:08:18.307 { 00:08:18.307 "subsystems": [ 00:08:18.307 { 00:08:18.307 "subsystem": "bdev", 00:08:18.307 "config": [ 00:08:18.307 { 00:08:18.307 "params": { 00:08:18.307 "block_size": 512, 00:08:18.307 "num_blocks": 512, 00:08:18.307 "name": "malloc0" 00:08:18.307 }, 00:08:18.307 "method": "bdev_malloc_create" 00:08:18.307 }, 00:08:18.307 { 00:08:18.307 "params": { 00:08:18.307 "block_size": 512, 00:08:18.307 "num_blocks": 512, 00:08:18.307 "name": "malloc1" 00:08:18.307 }, 00:08:18.307 "method": "bdev_malloc_create" 00:08:18.307 }, 00:08:18.307 { 00:08:18.307 "method": "bdev_wait_for_examine" 00:08:18.307 } 00:08:18.307 ] 00:08:18.307 } 00:08:18.307 ] 00:08:18.307 } 00:08:18.565 [2024-11-15 10:31:43.807068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.565 [2024-11-15 10:31:43.920900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.565 [2024-11-15 10:31:44.012583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.824 [2024-11-15 10:31:44.093658] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:08:18.824 [2024-11-15 10:31:44.093727] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:18.824 [2024-11-15 10:31:44.281852] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:19.156 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:08:19.156 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.156 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.157 00:08:19.157 real 0m0.767s 00:08:19.157 user 0m0.507s 00:08:19.157 sys 0m0.212s 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:19.157 ************************************ 00:08:19.157 END TEST dd_invalid_seek 00:08:19.157 ************************************ 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:19.157 ************************************ 00:08:19.157 START TEST dd_invalid_skip 00:08:19.157 ************************************ 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1127 -- # invalid_skip 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:19.157 10:31:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:19.157 { 00:08:19.157 "subsystems": [ 00:08:19.157 { 00:08:19.157 "subsystem": "bdev", 00:08:19.157 "config": [ 00:08:19.157 { 00:08:19.157 "params": { 00:08:19.157 "block_size": 512, 00:08:19.157 "num_blocks": 512, 00:08:19.157 "name": "malloc0" 00:08:19.157 }, 00:08:19.157 "method": "bdev_malloc_create" 00:08:19.157 }, 00:08:19.157 { 00:08:19.157 "params": { 00:08:19.157 "block_size": 512, 00:08:19.157 "num_blocks": 512, 00:08:19.157 "name": "malloc1" 
00:08:19.157 }, 00:08:19.157 "method": "bdev_malloc_create" 00:08:19.157 }, 00:08:19.157 { 00:08:19.157 "method": "bdev_wait_for_examine" 00:08:19.157 } 00:08:19.157 ] 00:08:19.157 } 00:08:19.157 ] 00:08:19.157 } 00:08:19.157 [2024-11-15 10:31:44.493241] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:08:19.157 [2024-11-15 10:31:44.493370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62041 ] 00:08:19.157 [2024-11-15 10:31:44.648016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.414 [2024-11-15 10:31:44.725807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.414 [2024-11-15 10:31:44.805338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.414 [2024-11-15 10:31:44.886040] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:08:19.414 [2024-11-15 10:31:44.886117] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:19.671 [2024-11-15 10:31:45.070215] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.930 00:08:19.930 real 0m0.759s 00:08:19.930 user 0m0.501s 00:08:19.930 sys 0m0.219s 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:19.930 ************************************ 00:08:19.930 END TEST dd_invalid_skip 00:08:19.930 ************************************ 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:19.930 ************************************ 00:08:19.930 START TEST dd_invalid_input_count 00:08:19.930 ************************************ 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1127 -- # invalid_input_count 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:19.930 10:31:45 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:19.930 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:19.930 { 00:08:19.930 "subsystems": [ 00:08:19.930 { 00:08:19.930 "subsystem": "bdev", 00:08:19.930 "config": [ 00:08:19.930 { 00:08:19.930 "params": { 00:08:19.930 "block_size": 512, 00:08:19.930 "num_blocks": 512, 00:08:19.930 "name": "malloc0" 00:08:19.930 }, 00:08:19.930 "method": "bdev_malloc_create" 00:08:19.930 }, 00:08:19.930 { 00:08:19.930 "params": { 00:08:19.930 "block_size": 512, 00:08:19.930 "num_blocks": 512, 00:08:19.930 "name": "malloc1" 00:08:19.930 }, 00:08:19.930 "method": "bdev_malloc_create" 00:08:19.930 }, 00:08:19.930 { 00:08:19.930 "method": "bdev_wait_for_examine" 00:08:19.930 } 
00:08:19.930 ] 00:08:19.930 } 00:08:19.930 ] 00:08:19.930 } 00:08:19.930 [2024-11-15 10:31:45.289659] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:08:19.930 [2024-11-15 10:31:45.289767] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62081 ] 00:08:20.189 [2024-11-15 10:31:45.438407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.189 [2024-11-15 10:31:45.518439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.189 [2024-11-15 10:31:45.594132] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.189 [2024-11-15 10:31:45.674172] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:08:20.189 [2024-11-15 10:31:45.674236] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:20.447 [2024-11-15 10:31:45.863583] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:20.706 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:08:20.706 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:20.706 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:08:20.706 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:08:20.706 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:08:20.706 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:20.706 00:08:20.706 real 0m0.726s 00:08:20.706 user 0m0.471s 00:08:20.706 sys 0m0.205s 00:08:20.706 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:20.706 ************************************ 00:08:20.706 END TEST dd_invalid_input_count 00:08:20.706 ************************************ 00:08:20.706 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:20.706 10:31:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:08:20.706 10:31:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:20.706 10:31:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:20.706 10:31:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:20.706 ************************************ 00:08:20.706 START TEST dd_invalid_output_count 00:08:20.706 ************************************ 00:08:20.706 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1127 -- # invalid_output_count 00:08:20.706 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:20.706 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:20.706 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A 
method_bdev_malloc_create_0 00:08:20.706 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:20.706 10:31:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:08:20.706 10:31:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:08:20.706 10:31:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:08:20.706 10:31:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:20.706 10:31:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:20.706 10:31:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.706 10:31:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.706 10:31:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.706 10:31:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.706 10:31:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.706 10:31:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.706 10:31:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.706 10:31:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:20.706 10:31:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:20.706 { 00:08:20.706 "subsystems": [ 00:08:20.706 { 00:08:20.706 "subsystem": "bdev", 00:08:20.706 "config": [ 00:08:20.706 { 00:08:20.706 "params": { 00:08:20.706 "block_size": 512, 00:08:20.706 "num_blocks": 512, 00:08:20.706 "name": "malloc0" 00:08:20.706 }, 00:08:20.706 "method": "bdev_malloc_create" 00:08:20.706 }, 00:08:20.706 { 00:08:20.707 "method": "bdev_wait_for_examine" 00:08:20.707 } 00:08:20.707 ] 00:08:20.707 } 00:08:20.707 ] 00:08:20.707 } 00:08:20.707 [2024-11-15 10:31:46.070737] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:08:20.707 [2024-11-15 10:31:46.070868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62114 ] 00:08:20.965 [2024-11-15 10:31:46.221976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.965 [2024-11-15 10:31:46.302557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.965 [2024-11-15 10:31:46.377720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.965 [2024-11-15 10:31:46.446762] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:08:20.965 [2024-11-15 10:31:46.446832] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:21.224 [2024-11-15 10:31:46.617040] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:21.224 10:31:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:08:21.224 10:31:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:21.224 10:31:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:08:21.224 10:31:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:08:21.224 10:31:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:08:21.224 10:31:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:21.224 00:08:21.224 real 0m0.703s 00:08:21.224 user 0m0.457s 00:08:21.224 sys 0m0.205s 00:08:21.224 10:31:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:21.224 10:31:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:21.224 ************************************ 00:08:21.224 END TEST dd_invalid_output_count 00:08:21.224 ************************************ 00:08:21.483 10:31:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:08:21.483 10:31:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:21.483 10:31:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:21.483 10:31:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:21.483 ************************************ 00:08:21.483 START TEST dd_bs_not_multiple 00:08:21.483 ************************************ 00:08:21.483 10:31:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1127 -- # bs_not_multiple 00:08:21.483 10:31:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:21.483 10:31:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:21.483 10:31:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:08:21.483 10:31:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:21.483 10:31:46 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:21.483 10:31:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:08:21.483 10:31:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:21.483 10:31:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:08:21.483 10:31:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:21.483 10:31:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.483 10:31:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:08:21.483 10:31:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:08:21.483 10:31:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:21.483 10:31:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.483 10:31:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.483 10:31:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.484 10:31:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.484 10:31:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.484 10:31:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.484 10:31:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:21.484 10:31:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:21.484 { 00:08:21.484 "subsystems": [ 00:08:21.484 { 00:08:21.484 "subsystem": "bdev", 00:08:21.484 "config": [ 00:08:21.484 { 00:08:21.484 "params": { 00:08:21.484 "block_size": 512, 00:08:21.484 "num_blocks": 512, 00:08:21.484 "name": "malloc0" 00:08:21.484 }, 00:08:21.484 "method": "bdev_malloc_create" 00:08:21.484 }, 00:08:21.484 { 00:08:21.484 "params": { 00:08:21.484 "block_size": 512, 00:08:21.484 "num_blocks": 512, 00:08:21.484 "name": "malloc1" 00:08:21.484 }, 00:08:21.484 "method": "bdev_malloc_create" 00:08:21.484 }, 00:08:21.484 { 00:08:21.484 "method": "bdev_wait_for_examine" 00:08:21.484 } 00:08:21.484 ] 00:08:21.484 } 00:08:21.484 ] 00:08:21.484 } 00:08:21.484 [2024-11-15 10:31:46.813233] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:08:21.484 [2024-11-15 10:31:46.813328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62145 ] 00:08:21.484 [2024-11-15 10:31:46.957846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.743 [2024-11-15 10:31:47.046813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.743 [2024-11-15 10:31:47.121164] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.743 [2024-11-15 10:31:47.186780] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:08:21.743 [2024-11-15 10:31:47.186863] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:22.240 [2024-11-15 10:31:47.311605] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:22.240 10:31:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:08:22.240 10:31:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:22.240 10:31:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:08:22.240 10:31:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:08:22.240 10:31:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:08:22.240 10:31:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:22.240 00:08:22.240 real 0m0.622s 00:08:22.240 user 0m0.394s 00:08:22.240 sys 0m0.178s 00:08:22.240 10:31:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:22.240 10:31:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:22.240 ************************************ 00:08:22.240 END TEST dd_bs_not_multiple 00:08:22.240 ************************************ 00:08:22.240 00:08:22.240 real 0m7.835s 00:08:22.240 user 0m4.283s 00:08:22.240 sys 0m2.957s 00:08:22.240 10:31:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:22.240 10:31:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:22.240 ************************************ 00:08:22.240 END TEST spdk_dd_negative 00:08:22.240 ************************************ 00:08:22.240 00:08:22.240 real 1m22.146s 00:08:22.240 user 0m52.886s 00:08:22.240 sys 0m35.786s 00:08:22.240 10:31:47 spdk_dd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:22.240 10:31:47 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:22.240 ************************************ 00:08:22.240 END TEST spdk_dd 00:08:22.240 ************************************ 00:08:22.240 10:31:47 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:22.240 10:31:47 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:08:22.240 10:31:47 -- spdk/autotest.sh@256 -- # timing_exit lib 00:08:22.240 10:31:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:22.240 10:31:47 -- common/autotest_common.sh@10 -- # set +x 00:08:22.240 10:31:47 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:08:22.240 10:31:47 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:08:22.240 10:31:47 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:08:22.240 10:31:47 -- spdk/autotest.sh@273 -- 
# export NET_TYPE 00:08:22.240 10:31:47 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:08:22.240 10:31:47 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:08:22.240 10:31:47 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:22.240 10:31:47 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:22.240 10:31:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:22.240 10:31:47 -- common/autotest_common.sh@10 -- # set +x 00:08:22.240 ************************************ 00:08:22.240 START TEST nvmf_tcp 00:08:22.240 ************************************ 00:08:22.240 10:31:47 nvmf_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:22.240 * Looking for test storage... 00:08:22.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:22.240 10:31:47 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:22.240 10:31:47 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:08:22.240 10:31:47 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:22.240 10:31:47 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.240 10:31:47 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:22.240 10:31:47 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.240 10:31:47 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:22.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.240 --rc genhtml_branch_coverage=1 00:08:22.240 --rc genhtml_function_coverage=1 00:08:22.240 --rc genhtml_legend=1 00:08:22.240 --rc geninfo_all_blocks=1 00:08:22.240 --rc geninfo_unexecuted_blocks=1 00:08:22.240 00:08:22.240 ' 00:08:22.240 10:31:47 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:22.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.240 --rc genhtml_branch_coverage=1 00:08:22.240 --rc genhtml_function_coverage=1 00:08:22.240 --rc genhtml_legend=1 00:08:22.240 --rc geninfo_all_blocks=1 00:08:22.240 --rc geninfo_unexecuted_blocks=1 00:08:22.240 00:08:22.240 ' 00:08:22.241 10:31:47 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:22.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.241 --rc genhtml_branch_coverage=1 00:08:22.241 --rc genhtml_function_coverage=1 00:08:22.241 --rc genhtml_legend=1 00:08:22.241 --rc geninfo_all_blocks=1 00:08:22.241 --rc geninfo_unexecuted_blocks=1 00:08:22.241 00:08:22.241 ' 00:08:22.241 10:31:47 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:22.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.241 --rc genhtml_branch_coverage=1 00:08:22.241 --rc genhtml_function_coverage=1 00:08:22.241 --rc genhtml_legend=1 00:08:22.241 --rc geninfo_all_blocks=1 00:08:22.241 --rc geninfo_unexecuted_blocks=1 00:08:22.241 00:08:22.241 ' 00:08:22.241 10:31:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:22.241 10:31:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:22.241 10:31:47 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:22.241 10:31:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:22.241 10:31:47 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:22.241 10:31:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:22.241 ************************************ 00:08:22.241 START TEST nvmf_target_core 00:08:22.241 ************************************ 00:08:22.241 10:31:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:22.499 * Looking for test storage... 00:08:22.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.499 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:22.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.500 --rc genhtml_branch_coverage=1 00:08:22.500 --rc genhtml_function_coverage=1 00:08:22.500 --rc genhtml_legend=1 00:08:22.500 --rc geninfo_all_blocks=1 00:08:22.500 --rc geninfo_unexecuted_blocks=1 00:08:22.500 00:08:22.500 ' 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:22.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.500 --rc genhtml_branch_coverage=1 00:08:22.500 --rc genhtml_function_coverage=1 00:08:22.500 --rc genhtml_legend=1 00:08:22.500 --rc geninfo_all_blocks=1 00:08:22.500 --rc geninfo_unexecuted_blocks=1 00:08:22.500 00:08:22.500 ' 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:22.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.500 --rc genhtml_branch_coverage=1 00:08:22.500 --rc genhtml_function_coverage=1 00:08:22.500 --rc genhtml_legend=1 00:08:22.500 --rc geninfo_all_blocks=1 00:08:22.500 --rc geninfo_unexecuted_blocks=1 00:08:22.500 00:08:22.500 ' 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:22.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.500 --rc genhtml_branch_coverage=1 00:08:22.500 --rc genhtml_function_coverage=1 00:08:22.500 --rc genhtml_legend=1 00:08:22.500 --rc geninfo_all_blocks=1 00:08:22.500 --rc geninfo_unexecuted_blocks=1 00:08:22.500 00:08:22.500 ' 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:22.500 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:22.500 ************************************ 00:08:22.500 START TEST nvmf_host_management 00:08:22.500 ************************************ 00:08:22.500 10:31:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:22.760 * Looking for test storage... 
00:08:22.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:22.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.760 --rc genhtml_branch_coverage=1 00:08:22.760 --rc genhtml_function_coverage=1 00:08:22.760 --rc genhtml_legend=1 00:08:22.760 --rc geninfo_all_blocks=1 00:08:22.760 --rc geninfo_unexecuted_blocks=1 00:08:22.760 00:08:22.760 ' 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:22.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.760 --rc genhtml_branch_coverage=1 00:08:22.760 --rc genhtml_function_coverage=1 00:08:22.760 --rc genhtml_legend=1 00:08:22.760 --rc geninfo_all_blocks=1 00:08:22.760 --rc geninfo_unexecuted_blocks=1 00:08:22.760 00:08:22.760 ' 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:22.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.760 --rc genhtml_branch_coverage=1 00:08:22.760 --rc genhtml_function_coverage=1 00:08:22.760 --rc genhtml_legend=1 00:08:22.760 --rc geninfo_all_blocks=1 00:08:22.760 --rc geninfo_unexecuted_blocks=1 00:08:22.760 00:08:22.760 ' 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:22.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.760 --rc genhtml_branch_coverage=1 00:08:22.760 --rc genhtml_function_coverage=1 00:08:22.760 --rc genhtml_legend=1 00:08:22.760 --rc geninfo_all_blocks=1 00:08:22.760 --rc geninfo_unexecuted_blocks=1 00:08:22.760 00:08:22.760 ' 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
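The `lt 1.15 2` / `cmp_versions` exchange traced three times above is scripts/common.sh probing the installed lcov: each version string is split on `.`, `-` and `:`, the fields are compared numerically left to right, and because 1.15 sorts before 2 the harness exports the pre-2.0 `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options. A condensed, self-contained re-implementation of that comparison (illustrative only; `version_lt` is not a real SPDK helper, and unlike `decimal` it assumes purely numeric fields):

    version_lt() {                      # returns 0 iff version $1 < version $2
        local IFS=.-:                   # same separators cmp_versions splits on
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                        # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "old lcov: use --rc lcov_*_coverage options"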
00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.760 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:22.761 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:22.761 10:31:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:22.761 Cannot find device "nvmf_init_br" 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:22.761 Cannot find device "nvmf_init_br2" 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:22.761 Cannot find device "nvmf_tgt_br" 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:22.761 Cannot find device "nvmf_tgt_br2" 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:22.761 Cannot find device "nvmf_init_br" 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:22.761 Cannot find device "nvmf_init_br2" 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:08:22.761 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:23.019 Cannot find device "nvmf_tgt_br" 00:08:23.019 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:23.020 Cannot find device "nvmf_tgt_br2" 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:23.020 Cannot find device "nvmf_br" 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:23.020 Cannot find device "nvmf_init_if" 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:23.020 Cannot find device "nvmf_init_if2" 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:23.020 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:23.020 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:23.020 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:08:23.278 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:23.278 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:23.278 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:23.278 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:23.278 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:23.278 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:23.278 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:23.278 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:23.278 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:23.278 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:23.278 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:23.278 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:23.278 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:23.278 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.154 ms 00:08:23.278 00:08:23.278 --- 10.0.0.3 ping statistics --- 00:08:23.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.278 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:08:23.278 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:23.278 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:23.278 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:08:23.278 00:08:23.278 --- 10.0.0.4 ping statistics --- 00:08:23.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.278 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:08:23.278 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:23.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:23.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:08:23.279 00:08:23.279 --- 10.0.0.1 ping statistics --- 00:08:23.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.279 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:23.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:23.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:08:23.279 00:08:23.279 --- 10.0.0.2 ping statistics --- 00:08:23.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.279 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62485 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62485 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 62485 ']' 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:23.279 10:31:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.537 [2024-11-15 10:31:48.778377] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
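The `ip`/`iptables` sequence above is `nvmf_veth_init` building the virtual test network: the initiator-side veth ends stay in the root namespace, the target-side ends move into `nvmf_tgt_ns_spdk`, and a bridge joins the peer ends so 10.0.0.1/2 can reach 10.0.0.3/4, which the four pings then verify. Condensed to a single initiator/target pair (a sketch, not the full helper in test/nvmf/common.sh, which also sets up the if2 pair and tears down stale devices first):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3    # initiator -> target, matching the checks above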
00:08:23.537 [2024-11-15 10:31:48.778494] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.537 [2024-11-15 10:31:48.934179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.537 [2024-11-15 10:31:49.008840] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.537 [2024-11-15 10:31:49.008909] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.537 [2024-11-15 10:31:49.008924] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.537 [2024-11-15 10:31:49.008935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.537 [2024-11-15 10:31:49.008944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.537 [2024-11-15 10:31:49.010129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.537 [2024-11-15 10:31:49.010280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.537 [2024-11-15 10:31:49.010390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:23.537 [2024-11-15 10:31:49.010443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.796 [2024-11-15 10:31:49.066956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.364 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:24.364 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:08:24.364 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:24.364 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:24.364 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.364 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.364 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:24.364 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.364 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.364 [2024-11-15 10:31:49.838342] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.364 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.364 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:24.364 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:24.364 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.364 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
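host_management.sh now regenerates rpcs.txt (the `cat` on the next line). The file's contents are not echoed into this log, but given MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 above, and the Malloc0 bdev plus the 10.0.0.3:4420 listener that appear below, a plausible reconstruction of the batch is (hypothetical — these are standard rpc.py subcommands and flags, not a dump of the real file):

    # reconstruction, not the actual rpcs.txt contents
    bdev_malloc_create 64 512 -b Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420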
00:08:24.364 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:24.621 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:24.621 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.621 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.621 Malloc0 00:08:24.621 [2024-11-15 10:31:49.918698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:24.621 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.621 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:24.621 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:24.621 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:24.621 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62545 00:08:24.621 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62545 /var/tmp/bdevperf.sock 00:08:24.621 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 62545 ']' 00:08:24.621 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:24.621 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:24.621 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
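Both here (bdevperf on /var/tmp/bdevperf.sock) and earlier (nvmf_tgt on /var/tmp/spdk.sock), `waitforlisten` blocks until the freshly forked app answers RPCs on its UNIX socket before any commands are sent. The gist of that helper, as a sketch (the real implementation in autotest_common.sh also checks that the pid is still alive between retries):

    sock=/var/tmp/bdevperf.sock
    for (( i = 100; i != 0; i-- )); do
        # rpc_get_methods succeeds only once the app's RPC server is up
        scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
    (( i != 0 )) || echo "timed out waiting for $sock" >&2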
00:08:24.621 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:24.621 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:24.621 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.621 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:24.621 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:24.621 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:24.621 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:24.622 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:24.622 { 00:08:24.622 "params": { 00:08:24.622 "name": "Nvme$subsystem", 00:08:24.622 "trtype": "$TEST_TRANSPORT", 00:08:24.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.622 "adrfam": "ipv4", 00:08:24.622 "trsvcid": "$NVMF_PORT", 00:08:24.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.622 "hdgst": ${hdgst:-false}, 00:08:24.622 "ddgst": ${ddgst:-false} 00:08:24.622 }, 00:08:24.622 "method": "bdev_nvme_attach_controller" 00:08:24.622 } 00:08:24.622 EOF 00:08:24.622 )") 00:08:24.622 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:24.622 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:24.622 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:24.622 10:31:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:24.622 "params": { 00:08:24.622 "name": "Nvme0", 00:08:24.622 "trtype": "tcp", 00:08:24.622 "traddr": "10.0.0.3", 00:08:24.622 "adrfam": "ipv4", 00:08:24.622 "trsvcid": "4420", 00:08:24.622 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:24.622 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:24.622 "hdgst": false, 00:08:24.622 "ddgst": false 00:08:24.622 }, 00:08:24.622 "method": "bdev_nvme_attach_controller" 00:08:24.622 }' 00:08:24.622 [2024-11-15 10:31:50.032871] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:08:24.622 [2024-11-15 10:31:50.032985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62545 ] 00:08:24.879 [2024-11-15 10:31:50.244228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.879 [2024-11-15 10:31:50.322583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.137 [2024-11-15 10:31:50.384978] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.137 Running I/O for 10 seconds... 
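With I/O running, the harness next polls bdevperf until the target has served enough reads; the `waitforio` loop traced below reduces to this pattern (a condensed sketch — the real helper also bails out if the RPC socket or bdev name is empty):

    for (( i = 10; i != 0; i-- )); do
        reads=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        [[ $reads -ge 100 ]] && break   # below: 835 reads on the first poll
        sleep 1
    done
    (( i != 0 ))                        # iterations left => I/O is flowing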
00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.704 10:31:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.704 10:31:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:25.704 [2024-11-15 10:31:51.153719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.704 [2024-11-15 10:31:51.153770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.704 [2024-11-15 10:31:51.153797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.704 [2024-11-15 10:31:51.153808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.704 [2024-11-15 10:31:51.153820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.704 [2024-11-15 10:31:51.153830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.704 [2024-11-15 10:31:51.153842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.704 [2024-11-15 10:31:51.153852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.704 [2024-11-15 10:31:51.153864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.704 [2024-11-15 10:31:51.153874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.704 [2024-11-15 10:31:51.153885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.704 [2024-11-15 10:31:51.153895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.704 [2024-11-15 10:31:51.153906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.705 [2024-11-15 10:31:51.153916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.705 [2024-11-15 10:31:51.153927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.705 [2024-11-15 10:31:51.153937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [log condensed: the identical WRITE command and ABORTED - SQ DELETION (00/08) completion pair repeats for cid:9 through cid:56, lba:124032 through lba:130048 in 128-block steps, len:128 each; the first and last records of the run are kept in full above and below] 00:08:25.706 [2024-11-15 10:31:51.155007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.706 [2024-11-15 10:31:51.155017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.706 [2024-11-15 10:31:51.155028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.706 [2024-11-15 10:31:51.155037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.706 [2024-11-15 10:31:51.155048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.706 [2024-11-15 10:31:51.155057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.706 [2024-11-15 10:31:51.155068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.706 [2024-11-15 10:31:51.155078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.706 [2024-11-15 10:31:51.155088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.706 [2024-11-15 10:31:51.155098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.706 [2024-11-15 10:31:51.155109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.706 [2024-11-15 10:31:51.155118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.706 [2024-11-15 10:31:51.155129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.706 [2024-11-15 10:31:51.155143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.706 [2024-11-15 10:31:51.155154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1debc00 is same with the state(6) to be set 00:08:25.706 [2024-11-15 10:31:51.155345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:25.706 [2024-11-15 10:31:51.155374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.706 [2024-11-15 10:31:51.155388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:25.706 [2024-11-15 10:31:51.155397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.706 [2024-11-15 10:31:51.155407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:25.706 [2024-11-15 10:31:51.155417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.706 [2024-11-15 10:31:51.155427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
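(Annotation.) The flood of ABORTED - SQ DELETION records above is the point of this test, not a failure: host_management.sh pulls the host NQN out of the subsystem while the 64-deep verify workload is in flight, the target deletes the I/O submission queue, and every queued WRITE completes as aborted; once the host is re-added, bdevperf resets and re-attaches the controller, as the records below show. The RPC sequence that triggers it, as traced earlier (rpc_cmd is the suite's wrapper around rpc.py):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # sever the host mid-I/O
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0      # restore access
sleep 1   # let bdevperf observe the aborts and reset the controller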
00:08:25.706 [2024-11-15 10:31:51.155436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:25.706 [2024-11-15 10:31:51.155445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1decce0 is same with the state(6) to be set 00:08:25.706 [2024-11-15 10:31:51.156550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:25.706 task offset: 122880 on job bdev=Nvme0n1 fails 00:08:25.706 00:08:25.706 Latency(us) 00:08:25.706 [2024-11-15T10:31:51.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.706 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:25.706 Job: Nvme0n1 ended in about 0.65 seconds with error 00:08:25.706 Verification LBA range: start 0x0 length 0x400 00:08:25.706 Nvme0n1 : 0.65 1475.35 92.21 98.36 0.00 39559.16 2204.39 40513.16 00:08:25.706 [2024-11-15T10:31:51.204Z] =================================================================================================================== 00:08:25.706 [2024-11-15T10:31:51.204Z] Total : 1475.35 92.21 98.36 0.00 39559.16 2204.39 40513.16 00:08:25.706 [2024-11-15 10:31:51.158753] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:25.706 [2024-11-15 10:31:51.158782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1decce0 (9): Bad file descriptor 00:08:25.706 [2024-11-15 10:31:51.171212] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:27.081 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62545 00:08:27.081 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62545) - No such process 00:08:27.081 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:27.081 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:27.081 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:27.081 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:27.081 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:27.081 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:27.081 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:27.081 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:27.081 { 00:08:27.081 "params": { 00:08:27.081 "name": "Nvme$subsystem", 00:08:27.081 "trtype": "$TEST_TRANSPORT", 00:08:27.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:27.081 "adrfam": "ipv4", 00:08:27.081 "trsvcid": "$NVMF_PORT", 00:08:27.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:27.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:27.081 "hdgst": ${hdgst:-false}, 00:08:27.081 "ddgst": ${ddgst:-false} 00:08:27.081 }, 00:08:27.081 "method": 
"bdev_nvme_attach_controller" 00:08:27.081 } 00:08:27.081 EOF 00:08:27.081 )") 00:08:27.081 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:27.081 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:27.081 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:27.081 10:31:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:27.081 "params": { 00:08:27.081 "name": "Nvme0", 00:08:27.081 "trtype": "tcp", 00:08:27.081 "traddr": "10.0.0.3", 00:08:27.081 "adrfam": "ipv4", 00:08:27.081 "trsvcid": "4420", 00:08:27.081 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:27.081 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:27.081 "hdgst": false, 00:08:27.081 "ddgst": false 00:08:27.081 }, 00:08:27.081 "method": "bdev_nvme_attach_controller" 00:08:27.081 }' 00:08:27.081 [2024-11-15 10:31:52.216358] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:08:27.081 [2024-11-15 10:31:52.216474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62583 ] 00:08:27.081 [2024-11-15 10:31:52.369493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.081 [2024-11-15 10:31:52.437602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.081 [2024-11-15 10:31:52.503642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.340 Running I/O for 1 seconds... 00:08:28.274 1472.00 IOPS, 92.00 MiB/s 00:08:28.274 Latency(us) 00:08:28.274 [2024-11-15T10:31:53.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.274 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:28.274 Verification LBA range: start 0x0 length 0x400 00:08:28.274 Nvme0n1 : 1.00 1530.45 95.65 0.00 0.00 40982.47 6136.55 38368.35 00:08:28.274 [2024-11-15T10:31:53.772Z] =================================================================================================================== 00:08:28.274 [2024-11-15T10:31:53.772Z] Total : 1530.45 95.65 0.00 0.00 40982.47 6136.55 38368.35 00:08:28.533 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:28.533 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:28.533 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:28.533 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:28.533 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:28.533 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:28.533 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:28.533 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:28.533 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:28.533 
10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:28.533 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:28.534 rmmod nvme_tcp 00:08:28.534 rmmod nvme_fabrics 00:08:28.534 rmmod nvme_keyring 00:08:28.534 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:28.534 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:28.534 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:28.534 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62485 ']' 00:08:28.534 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62485 00:08:28.534 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 62485 ']' 00:08:28.534 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 62485 00:08:28.534 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:08:28.534 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:28.534 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62485 00:08:28.534 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:28.534 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:28.534 killing process with pid 62485 00:08:28.534 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62485' 00:08:28.534 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 62485 00:08:28.534 10:31:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 62485 00:08:28.793 [2024-11-15 10:31:54.186971] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:28.793 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:28.793 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:28.793 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:28.793 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:28.793 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:28.793 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:28.793 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:28.793 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:28.793 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:28.793 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:28.793 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management 
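(Annotation.) Teardown in nvmftestfini follows the pattern traced above. First the kernel initiator modules are unloaded under set +e and retried, since nvme-tcp can remain referenced for a moment after disconnect; a sketch, where the loop bounds mirror the trace and the break condition is our assumption:

set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
done
set -e

Then the target process is reaped via killprocess; condensed from the autotest_common.sh trace above (the real helper has more branches than this):

killprocess() {
    local pid=$1 process_name
    [[ -z $pid ]] && return 1
    kill -0 "$pid" || return 1                       # still running?
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [[ $process_name == sudo ]] && return 1          # never signal the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}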
-- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:28.793 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:28.793 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:28.793 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:28.793 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:29.053 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:29.053 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:29.053 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:29.053 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:29.053 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:29.053 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:29.053 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:29.053 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:29.053 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.053 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.053 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.053 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:08:29.053 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:29.053 00:08:29.053 real 0m6.509s 00:08:29.053 user 0m23.798s 00:08:29.053 sys 0m1.622s 00:08:29.053 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:29.053 ************************************ 00:08:29.053 END TEST nvmf_host_management 00:08:29.053 ************************************ 00:08:29.053 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.053 10:31:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:29.053 10:31:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:29.053 10:31:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:29.053 10:31:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.053 ************************************ 00:08:29.053 START TEST nvmf_lvol 00:08:29.053 ************************************ 00:08:29.053 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:29.313 * Looking for test 
storage... 00:08:29.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:29.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.313 --rc genhtml_branch_coverage=1 00:08:29.313 --rc genhtml_function_coverage=1 00:08:29.313 --rc genhtml_legend=1 00:08:29.313 --rc geninfo_all_blocks=1 00:08:29.313 --rc geninfo_unexecuted_blocks=1 00:08:29.313 00:08:29.313 ' 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:29.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.313 --rc genhtml_branch_coverage=1 00:08:29.313 --rc genhtml_function_coverage=1 00:08:29.313 --rc genhtml_legend=1 00:08:29.313 --rc geninfo_all_blocks=1 00:08:29.313 --rc geninfo_unexecuted_blocks=1 00:08:29.313 00:08:29.313 ' 00:08:29.313 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:29.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.313 --rc genhtml_branch_coverage=1 00:08:29.313 --rc genhtml_function_coverage=1 00:08:29.313 --rc genhtml_legend=1 00:08:29.314 --rc geninfo_all_blocks=1 00:08:29.314 --rc geninfo_unexecuted_blocks=1 00:08:29.314 00:08:29.314 ' 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:29.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.314 --rc genhtml_branch_coverage=1 00:08:29.314 --rc genhtml_function_coverage=1 00:08:29.314 --rc genhtml_legend=1 00:08:29.314 --rc geninfo_all_blocks=1 00:08:29.314 --rc geninfo_unexecuted_blocks=1 00:08:29.314 00:08:29.314 ' 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.314 10:31:54 
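(Annotation.) The lt/cmp_versions trace above is the suite's generic version comparison: both version strings are split on ".", "-", and ":" and compared field by field (here lcov 1.15 against 2, which is why the pre-2.x LCOV_OPTS are exported). A condensed sketch of the logic as traced; the field splitting, decimal checks, and ternary bound mirror the trace, while the return plumbing of the real scripts/common.sh is simplified:

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:   # split fields on dot, dash, and colon
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # greater: '<' fails
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # smaller: '<' holds
    done
    return 1   # equal: not strictly less
}

lt 1.15 2 && echo "lcov predates 2.x"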
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=[the same toolchain PATH as export.sh@2 above, with /opt/go/1.21.1/bin prepended] 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=[the same PATH, with /opt/protoc/21.7/bin prepended] 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo [the exported PATH] 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:29.314 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:29.314 
10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
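(Annotation.) The variables above name the virtual topology that nvmf_veth_init builds next: initiator veth pairs on the host side (10.0.0.1 and 10.0.0.2), target veth pairs inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), all joined by the nvmf_br bridge. Condensed from the commands that follow, showing one initiator/target pair; the if2/br2 pair is set up identically:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br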
00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:29.314 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:29.314 Cannot find device "nvmf_init_br" 00:08:29.315 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:29.315 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:29.315 Cannot find device "nvmf_init_br2" 00:08:29.315 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:29.315 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:29.315 Cannot find device "nvmf_tgt_br" 00:08:29.315 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:08:29.315 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:29.315 Cannot find device "nvmf_tgt_br2" 00:08:29.315 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:08:29.315 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:29.315 Cannot find device "nvmf_init_br" 00:08:29.315 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:08:29.315 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:29.315 Cannot find device "nvmf_init_br2" 00:08:29.315 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:08:29.315 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:29.315 Cannot find device "nvmf_tgt_br" 00:08:29.315 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:08:29.315 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:29.575 Cannot find device "nvmf_tgt_br2" 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:29.575 Cannot find device "nvmf_br" 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:29.575 Cannot find device "nvmf_init_if" 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:29.575 Cannot find device "nvmf_init_if2" 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:29.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:29.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:29.575 10:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:29.575 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:29.575 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:29.575 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:29.575 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:29.575 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:29.575 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:29.575 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:29.575 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:29.575 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:29.575 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:29.575 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:29.575 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:08:29.575 00:08:29.575 --- 10.0.0.3 ping statistics --- 00:08:29.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.575 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:08:29.575 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:29.575 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:29.575 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:08:29.575 00:08:29.576 --- 10.0.0.4 ping statistics --- 00:08:29.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.576 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:29.576 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:29.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:29.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:08:29.835 00:08:29.835 --- 10.0.0.1 ping statistics --- 00:08:29.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.835 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:29.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:29.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:08:29.835 00:08:29.835 --- 10.0.0.2 ping statistics --- 00:08:29.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.835 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62859 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62859 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 62859 ']' 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:29.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:29.835 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:29.835 [2024-11-15 10:31:55.163970] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:08:29.835 [2024-11-15 10:31:55.164062] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.835 [2024-11-15 10:31:55.314614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:30.094 [2024-11-15 10:31:55.383281] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.094 [2024-11-15 10:31:55.383346] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.094 [2024-11-15 10:31:55.383360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.094 [2024-11-15 10:31:55.383370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.094 [2024-11-15 10:31:55.383380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.094 [2024-11-15 10:31:55.384624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.094 [2024-11-15 10:31:55.384716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.094 [2024-11-15 10:31:55.384721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.094 [2024-11-15 10:31:55.441447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.094 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:30.094 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:08:30.094 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:30.095 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:30.095 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:30.095 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.095 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:30.354 [2024-11-15 10:31:55.846643] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.613 10:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:30.872 10:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:30.872 10:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:31.130 10:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:31.130 10:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:31.389 10:31:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:31.649 10:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4d44999e-374d-460c-9918-f40403d27b80 00:08:31.649 10:31:57 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4d44999e-374d-460c-9918-f40403d27b80 lvol 20 00:08:31.907 10:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=914a2a22-fd3e-403b-adcd-1fbbf318e474 00:08:31.907 10:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:32.166 10:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 914a2a22-fd3e-403b-adcd-1fbbf318e474 00:08:32.424 10:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:32.682 [2024-11-15 10:31:58.136175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:32.682 10:31:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:32.941 10:31:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62927 00:08:32.941 10:31:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:32.941 10:31:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:34.317 10:31:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 914a2a22-fd3e-403b-adcd-1fbbf318e474 MY_SNAPSHOT 00:08:34.317 10:31:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b2ab3ad8-06e6-4706-8c86-452b31d5e1dc 00:08:34.317 10:31:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 914a2a22-fd3e-403b-adcd-1fbbf318e474 30 00:08:34.885 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone b2ab3ad8-06e6-4706-8c86-452b31d5e1dc MY_CLONE 00:08:35.143 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=03796fa0-0f5b-4009-b709-21db38c24302 00:08:35.143 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 03796fa0-0f5b-4009-b709-21db38c24302 00:08:35.710 10:32:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62927 00:08:43.821 Initializing NVMe Controllers 00:08:43.821 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:43.821 Controller IO queue size 128, less than required. 00:08:43.821 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:43.821 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:43.821 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:43.821 Initialization complete. Launching workers. 
00:08:43.821 ========================================================
00:08:43.821 Latency(us)
00:08:43.821 Device Information : IOPS MiB/s Average min max
00:08:43.821 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10573.10 41.30 12108.78 2363.52 65478.00
00:08:43.821 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10561.50 41.26 12117.92 3508.57 84663.33
00:08:43.821 ========================================================
00:08:43.821 Total : 21134.60 82.56 12113.35 2363.52 84663.33
00:08:43.821
00:08:43.821 10:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:08:43.821 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 914a2a22-fd3e-403b-adcd-1fbbf318e474
00:08:44.080 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4d44999e-374d-460c-9918-f40403d27b80
00:08:44.340 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:08:44.340 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:08:44.340 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:08:44.340 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:44.340 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:08:44.340 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:44.340 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:08:44.340 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:44.340 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:44.340 rmmod nvme_tcp
00:08:44.340 rmmod nvme_fabrics
00:08:44.340 rmmod nvme_keyring
00:08:44.340 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:44.340 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:08:44.340 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:08:44.340 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62859 ']'
00:08:44.340 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62859
00:08:44.340 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 62859 ']'
00:08:44.340 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 62859
00:08:44.340 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname
00:08:44.340 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:08:44.340 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62859
00:08:44.340 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:08:44.340 killing process with pid 62859
10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol --
common/autotest_common.sh@970 -- # echo 'killing process with pid 62859' 00:08:44.340 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 62859 00:08:44.340 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 62859 00:08:44.599 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:44.599 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:44.599 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:44.599 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:44.599 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:44.599 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:44.599 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:44.599 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:44.599 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:44.599 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:44.599 10:32:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:44.599 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:44.599 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:44.599 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:44.599 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:44.599 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:44.599 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:44.599 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:44.857 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:44.857 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:44.857 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:44.857 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:44.857 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:44.857 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.858 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.858 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.858 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:44.858 00:08:44.858 real 0m15.706s 00:08:44.858 user 1m4.832s 00:08:44.858 sys 0m4.314s 00:08:44.858 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:08:44.858 ************************************ 00:08:44.858 END TEST nvmf_lvol 00:08:44.858 ************************************ 00:08:44.858 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:44.858 10:32:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:44.858 10:32:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:44.858 10:32:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:44.858 10:32:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:44.858 ************************************ 00:08:44.858 START TEST nvmf_lvs_grow 00:08:44.858 ************************************ 00:08:44.858 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:44.858 * Looking for test storage... 00:08:44.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:45.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.117 --rc genhtml_branch_coverage=1 00:08:45.117 --rc genhtml_function_coverage=1 00:08:45.117 --rc genhtml_legend=1 00:08:45.117 --rc geninfo_all_blocks=1 00:08:45.117 --rc geninfo_unexecuted_blocks=1 00:08:45.117 00:08:45.117 ' 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:45.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.117 --rc genhtml_branch_coverage=1 00:08:45.117 --rc genhtml_function_coverage=1 00:08:45.117 --rc genhtml_legend=1 00:08:45.117 --rc geninfo_all_blocks=1 00:08:45.117 --rc geninfo_unexecuted_blocks=1 00:08:45.117 00:08:45.117 ' 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:45.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.117 --rc genhtml_branch_coverage=1 00:08:45.117 --rc genhtml_function_coverage=1 00:08:45.117 --rc genhtml_legend=1 00:08:45.117 --rc geninfo_all_blocks=1 00:08:45.117 --rc geninfo_unexecuted_blocks=1 00:08:45.117 00:08:45.117 ' 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:45.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.117 --rc genhtml_branch_coverage=1 00:08:45.117 --rc genhtml_function_coverage=1 00:08:45.117 --rc genhtml_legend=1 00:08:45.117 --rc geninfo_all_blocks=1 00:08:45.117 --rc geninfo_unexecuted_blocks=1 00:08:45.117 00:08:45.117 ' 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:45.117 10:32:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.117 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:45.118 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
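
The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': test's -eq needs an integer on both sides, and the variable expands to an empty string. It is harmless here because the test simply fails and the script falls through. A minimal reproduction, with a guarded variant for contrast (the guard is illustrative, not what common.sh does):

    var=
    [ "$var" -eq 1 ]        # bash: [: : integer expression expected (exit status 2)
    [ "${var:-0}" -eq 1 ]   # empty defaults to 0: the comparison is valid, just false
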
00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
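
The NVMF_TARGET_NS_CMD array traced here is what later lets nvmf/common.sh@227 run the target inside the namespace by prepending the array to NVMF_APP. A minimal sketch of that array-prefix pattern, reusing the binary path and flags that appear in this log:

    NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")  # prepend the netns wrapper
    "${NVMF_APP[@]}" &   # expands word-for-word: ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt ...

Keeping the command as an array rather than a flat string preserves word boundaries when it is finally executed.
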
00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:45.118 Cannot find device "nvmf_init_br" 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:45.118 Cannot find device "nvmf_init_br2" 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:45.118 Cannot find device "nvmf_tgt_br" 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:45.118 Cannot find device "nvmf_tgt_br2" 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:45.118 Cannot find device "nvmf_init_br" 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:45.118 Cannot find device "nvmf_init_br2" 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:45.118 Cannot find device "nvmf_tgt_br" 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:45.118 Cannot find device "nvmf_tgt_br2" 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:45.118 Cannot find device "nvmf_br" 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:45.118 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:45.377 Cannot find device "nvmf_init_if" 00:08:45.377 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:45.377 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:45.377 Cannot find device "nvmf_init_if2" 00:08:45.377 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:45.377 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:45.377 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:45.377 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:45.377 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:45.377 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:45.377 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:45.377 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:45.377 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:45.377 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:45.377 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:45.377 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:45.377 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:45.377 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:45.377 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
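
The ACCEPT rules traced just below go in through the harness's ipts wrapper, which tags every rule with an "SPDK_NVMF:" comment; the iptr teardown seen after the lvol test (iptables-save | grep -v SPDK_NVMF | iptables-restore) then strips exactly those rules and nothing else. A sketch consistent with the traces, though the real helpers may differ in detail:

    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # rule is tagged on insert
    iptr                                                           # later: drop only tagged rules
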
00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:45.378 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:45.378 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:08:45.378 00:08:45.378 --- 10.0.0.3 ping statistics --- 00:08:45.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.378 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:45.378 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:45.378 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:08:45.378 00:08:45.378 --- 10.0.0.4 ping statistics --- 00:08:45.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.378 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:45.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:45.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:45.378 00:08:45.378 --- 10.0.0.1 ping statistics --- 00:08:45.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.378 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:45.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:45.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:08:45.378 00:08:45.378 --- 10.0.0.2 ping statistics --- 00:08:45.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.378 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:45.378 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:45.637 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:45.637 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:45.637 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:45.637 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:45.637 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63305 00:08:45.637 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63305 00:08:45.637 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:45.637 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 63305 ']' 00:08:45.637 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.637 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:45.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.637 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.637 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:45.637 10:32:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:45.637 [2024-11-15 10:32:10.950037] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:08:45.637 [2024-11-15 10:32:10.950145] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.637 [2024-11-15 10:32:11.091547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.896 [2024-11-15 10:32:11.143540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.896 [2024-11-15 10:32:11.143615] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.896 [2024-11-15 10:32:11.143642] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.896 [2024-11-15 10:32:11.143650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.896 [2024-11-15 10:32:11.143657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.896 [2024-11-15 10:32:11.144093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.896 [2024-11-15 10:32:11.198966] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.830 10:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:46.830 10:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:08:46.830 10:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:46.830 10:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:46.830 10:32:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:46.830 10:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.830 10:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:46.830 [2024-11-15 10:32:12.260430] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.830 10:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:46.830 10:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:46.830 10:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:46.830 10:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:46.830 ************************************ 00:08:46.830 START TEST lvs_grow_clean 00:08:46.830 ************************************ 00:08:46.830 10:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:08:46.830 10:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:46.830 10:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:46.830 10:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:46.830 10:32:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:46.830 10:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:46.830 10:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:46.830 10:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:46.830 10:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:46.830 10:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:47.397 10:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:47.397 10:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:47.656 10:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0c34eee2-3190-4814-a6e1-1370d27abe45 00:08:47.656 10:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c34eee2-3190-4814-a6e1-1370d27abe45 00:08:47.656 10:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:47.914 10:32:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:47.914 10:32:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:47.914 10:32:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0c34eee2-3190-4814-a6e1-1370d27abe45 lvol 150 00:08:48.215 10:32:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c8943f8a-04b9-4f3b-8e20-5a5d87629f3a 00:08:48.215 10:32:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:48.215 10:32:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:48.478 [2024-11-15 10:32:13.820432] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:48.478 [2024-11-15 10:32:13.820534] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:48.478 true 00:08:48.478 10:32:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c34eee2-3190-4814-a6e1-1370d27abe45 00:08:48.478 10:32:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:48.737 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:48.737 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:48.997 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c8943f8a-04b9-4f3b-8e20-5a5d87629f3a 00:08:49.256 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:49.515 [2024-11-15 10:32:14.925765] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:49.515 10:32:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:50.083 10:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63393 00:08:50.083 10:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:50.083 10:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:50.083 10:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63393 /var/tmp/bdevperf.sock 00:08:50.083 10:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 63393 ']' 00:08:50.083 10:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:50.083 10:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:50.083 10:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:50.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:50.083 10:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:50.083 10:32:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:50.083 [2024-11-15 10:32:15.346050] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
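At this point the clean-grow variant has the 150 MiB logical volume exported over NVMe-oF TCP and is bringing up bdevperf to drive I/O against it from a second process. A condensed sketch of the export flow exercised above, abbreviating the rpc.py path to repo-relative form and assuming an already-running target; the NQN, serial, address, and port are the values this run uses:

    # TCP transport, then a subsystem carrying the lvol as namespace 1
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c8943f8a-04b9-4f3b-8e20-5a5d87629f3a
    # expose the subsystem and the discovery service on the test address
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420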
00:08:50.083 [2024-11-15 10:32:15.346164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63393 ] 00:08:50.083 [2024-11-15 10:32:15.495672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.083 [2024-11-15 10:32:15.569171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.341 [2024-11-15 10:32:15.629311] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:50.908 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:50.908 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:08:50.908 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:51.167 Nvme0n1 00:08:51.425 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:51.425 [ 00:08:51.425 { 00:08:51.425 "name": "Nvme0n1", 00:08:51.425 "aliases": [ 00:08:51.425 "c8943f8a-04b9-4f3b-8e20-5a5d87629f3a" 00:08:51.425 ], 00:08:51.425 "product_name": "NVMe disk", 00:08:51.425 "block_size": 4096, 00:08:51.425 "num_blocks": 38912, 00:08:51.425 "uuid": "c8943f8a-04b9-4f3b-8e20-5a5d87629f3a", 00:08:51.425 "numa_id": -1, 00:08:51.425 "assigned_rate_limits": { 00:08:51.425 "rw_ios_per_sec": 0, 00:08:51.425 "rw_mbytes_per_sec": 0, 00:08:51.425 "r_mbytes_per_sec": 0, 00:08:51.425 "w_mbytes_per_sec": 0 00:08:51.425 }, 00:08:51.425 "claimed": false, 00:08:51.425 "zoned": false, 00:08:51.425 "supported_io_types": { 00:08:51.425 "read": true, 00:08:51.425 "write": true, 00:08:51.425 "unmap": true, 00:08:51.425 "flush": true, 00:08:51.425 "reset": true, 00:08:51.425 "nvme_admin": true, 00:08:51.425 "nvme_io": true, 00:08:51.425 "nvme_io_md": false, 00:08:51.425 "write_zeroes": true, 00:08:51.425 "zcopy": false, 00:08:51.425 "get_zone_info": false, 00:08:51.425 "zone_management": false, 00:08:51.425 "zone_append": false, 00:08:51.425 "compare": true, 00:08:51.425 "compare_and_write": true, 00:08:51.425 "abort": true, 00:08:51.425 "seek_hole": false, 00:08:51.425 "seek_data": false, 00:08:51.425 "copy": true, 00:08:51.426 "nvme_iov_md": false 00:08:51.426 }, 00:08:51.426 "memory_domains": [ 00:08:51.426 { 00:08:51.426 "dma_device_id": "system", 00:08:51.426 "dma_device_type": 1 00:08:51.426 } 00:08:51.426 ], 00:08:51.426 "driver_specific": { 00:08:51.426 "nvme": [ 00:08:51.426 { 00:08:51.426 "trid": { 00:08:51.426 "trtype": "TCP", 00:08:51.426 "adrfam": "IPv4", 00:08:51.426 "traddr": "10.0.0.3", 00:08:51.426 "trsvcid": "4420", 00:08:51.426 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:51.426 }, 00:08:51.426 "ctrlr_data": { 00:08:51.426 "cntlid": 1, 00:08:51.426 "vendor_id": "0x8086", 00:08:51.426 "model_number": "SPDK bdev Controller", 00:08:51.426 "serial_number": "SPDK0", 00:08:51.426 "firmware_revision": "25.01", 00:08:51.426 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:51.426 "oacs": { 00:08:51.426 "security": 0, 00:08:51.426 "format": 0, 00:08:51.426 "firmware": 0, 
00:08:51.426 "ns_manage": 0 00:08:51.426 }, 00:08:51.426 "multi_ctrlr": true, 00:08:51.426 "ana_reporting": false 00:08:51.426 }, 00:08:51.426 "vs": { 00:08:51.426 "nvme_version": "1.3" 00:08:51.426 }, 00:08:51.426 "ns_data": { 00:08:51.426 "id": 1, 00:08:51.426 "can_share": true 00:08:51.426 } 00:08:51.426 } 00:08:51.426 ], 00:08:51.426 "mp_policy": "active_passive" 00:08:51.426 } 00:08:51.426 } 00:08:51.426 ] 00:08:51.684 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63418 00:08:51.684 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:51.684 10:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:51.684 Running I/O for 10 seconds... 00:08:52.619 Latency(us) 00:08:52.619 [2024-11-15T10:32:18.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.619 Nvme0n1 : 1.00 7165.00 27.99 0.00 0.00 0.00 0.00 0.00 00:08:52.619 [2024-11-15T10:32:18.117Z] =================================================================================================================== 00:08:52.619 [2024-11-15T10:32:18.117Z] Total : 7165.00 27.99 0.00 0.00 0.00 0.00 0.00 00:08:52.619 00:08:53.554 10:32:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0c34eee2-3190-4814-a6e1-1370d27abe45 00:08:53.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.812 Nvme0n1 : 2.00 7138.50 27.88 0.00 0.00 0.00 0.00 0.00 00:08:53.812 [2024-11-15T10:32:19.310Z] =================================================================================================================== 00:08:53.812 [2024-11-15T10:32:19.310Z] Total : 7138.50 27.88 0.00 0.00 0.00 0.00 0.00 00:08:53.812 00:08:53.812 true 00:08:53.812 10:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c34eee2-3190-4814-a6e1-1370d27abe45 00:08:53.812 10:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:54.070 10:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:54.070 10:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:54.070 10:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63418 00:08:54.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.636 Nvme0n1 : 3.00 7129.67 27.85 0.00 0.00 0.00 0.00 0.00 00:08:54.636 [2024-11-15T10:32:20.134Z] =================================================================================================================== 00:08:54.636 [2024-11-15T10:32:20.134Z] Total : 7129.67 27.85 0.00 0.00 0.00 0.00 0.00 00:08:54.636 00:08:55.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.573 Nvme0n1 : 4.00 7093.50 27.71 0.00 0.00 0.00 0.00 0.00 00:08:55.573 [2024-11-15T10:32:21.071Z] 
=================================================================================================================== 00:08:55.573 [2024-11-15T10:32:21.071Z] Total : 7093.50 27.71 0.00 0.00 0.00 0.00 0.00 00:08:55.573 00:08:56.947 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.947 Nvme0n1 : 5.00 7097.20 27.72 0.00 0.00 0.00 0.00 0.00 00:08:56.947 [2024-11-15T10:32:22.445Z] =================================================================================================================== 00:08:56.947 [2024-11-15T10:32:22.445Z] Total : 7097.20 27.72 0.00 0.00 0.00 0.00 0.00 00:08:56.947 00:08:57.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.879 Nvme0n1 : 6.00 6968.50 27.22 0.00 0.00 0.00 0.00 0.00 00:08:57.879 [2024-11-15T10:32:23.377Z] =================================================================================================================== 00:08:57.879 [2024-11-15T10:32:23.377Z] Total : 6968.50 27.22 0.00 0.00 0.00 0.00 0.00 00:08:57.879 00:08:58.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.816 Nvme0n1 : 7.00 6970.86 27.23 0.00 0.00 0.00 0.00 0.00 00:08:58.816 [2024-11-15T10:32:24.314Z] =================================================================================================================== 00:08:58.816 [2024-11-15T10:32:24.314Z] Total : 6970.86 27.23 0.00 0.00 0.00 0.00 0.00 00:08:58.816 00:08:59.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.753 Nvme0n1 : 8.00 6972.62 27.24 0.00 0.00 0.00 0.00 0.00 00:08:59.753 [2024-11-15T10:32:25.251Z] =================================================================================================================== 00:08:59.753 [2024-11-15T10:32:25.251Z] Total : 6972.62 27.24 0.00 0.00 0.00 0.00 0.00 00:08:59.753 00:09:00.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.688 Nvme0n1 : 9.00 6931.67 27.08 0.00 0.00 0.00 0.00 0.00 00:09:00.688 [2024-11-15T10:32:26.186Z] =================================================================================================================== 00:09:00.688 [2024-11-15T10:32:26.186Z] Total : 6931.67 27.08 0.00 0.00 0.00 0.00 0.00 00:09:00.688 00:09:01.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.622 Nvme0n1 : 10.00 6911.60 27.00 0.00 0.00 0.00 0.00 0.00 00:09:01.622 [2024-11-15T10:32:27.120Z] =================================================================================================================== 00:09:01.622 [2024-11-15T10:32:27.120Z] Total : 6911.60 27.00 0.00 0.00 0.00 0.00 0.00 00:09:01.622 00:09:01.622 00:09:01.622 Latency(us) 00:09:01.622 [2024-11-15T10:32:27.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:01.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.622 Nvme0n1 : 10.01 6920.15 27.03 0.00 0.00 18491.11 8757.99 137268.13 00:09:01.622 [2024-11-15T10:32:27.120Z] =================================================================================================================== 00:09:01.622 [2024-11-15T10:32:27.120Z] Total : 6920.15 27.03 0.00 0.00 18491.11 8757.99 137268.13 00:09:01.622 { 00:09:01.622 "results": [ 00:09:01.622 { 00:09:01.622 "job": "Nvme0n1", 00:09:01.622 "core_mask": "0x2", 00:09:01.622 "workload": "randwrite", 00:09:01.622 "status": "finished", 00:09:01.622 "queue_depth": 128, 00:09:01.622 "io_size": 4096, 00:09:01.622 "runtime": 
10.006142, 00:09:01.622 "iops": 6920.149644088601, 00:09:01.622 "mibps": 27.0318345472211, 00:09:01.622 "io_failed": 0, 00:09:01.622 "io_timeout": 0, 00:09:01.622 "avg_latency_us": 18491.114851408194, 00:09:01.622 "min_latency_us": 8757.992727272727, 00:09:01.622 "max_latency_us": 137268.13090909092 00:09:01.622 } 00:09:01.622 ], 00:09:01.622 "core_count": 1 00:09:01.622 } 00:09:01.622 10:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63393 00:09:01.622 10:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 63393 ']' 00:09:01.622 10:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 63393 00:09:01.622 10:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:09:01.622 10:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:01.622 10:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63393 00:09:01.880 killing process with pid 63393 00:09:01.880 Received shutdown signal, test time was about 10.000000 seconds 00:09:01.880 00:09:01.880 Latency(us) 00:09:01.880 [2024-11-15T10:32:27.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:01.880 [2024-11-15T10:32:27.378Z] =================================================================================================================== 00:09:01.880 [2024-11-15T10:32:27.378Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:01.880 10:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:01.880 10:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:01.880 10:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63393' 00:09:01.880 10:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 63393 00:09:01.880 10:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 63393 00:09:01.880 10:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:02.138 10:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:02.707 10:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c34eee2-3190-4814-a6e1-1370d27abe45 00:09:02.707 10:32:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:02.707 10:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:02.707 10:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:02.707 10:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:02.985 [2024-11-15 10:32:28.418150] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:02.985 10:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c34eee2-3190-4814-a6e1-1370d27abe45 00:09:02.985 10:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:02.985 10:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c34eee2-3190-4814-a6e1-1370d27abe45 00:09:02.985 10:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:02.985 10:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:02.985 10:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:02.985 10:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:02.985 10:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:02.985 10:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:02.985 10:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:02.985 10:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:02.985 10:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c34eee2-3190-4814-a6e1-1370d27abe45 00:09:03.551 request: 00:09:03.551 { 00:09:03.551 "uuid": "0c34eee2-3190-4814-a6e1-1370d27abe45", 00:09:03.551 "method": "bdev_lvol_get_lvstores", 00:09:03.551 "req_id": 1 00:09:03.551 } 00:09:03.551 Got JSON-RPC error response 00:09:03.551 response: 00:09:03.551 { 00:09:03.551 "code": -19, 00:09:03.551 "message": "No such device" 00:09:03.551 } 00:09:03.551 10:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:03.551 10:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:03.551 10:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:03.551 10:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:03.551 10:32:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:03.810 aio_bdev 00:09:03.810 10:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
c8943f8a-04b9-4f3b-8e20-5a5d87629f3a 00:09:03.810 10:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=c8943f8a-04b9-4f3b-8e20-5a5d87629f3a 00:09:03.810 10:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:03.810 10:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:09:03.810 10:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:03.810 10:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:03.810 10:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:04.069 10:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c8943f8a-04b9-4f3b-8e20-5a5d87629f3a -t 2000 00:09:04.328 [ 00:09:04.328 { 00:09:04.328 "name": "c8943f8a-04b9-4f3b-8e20-5a5d87629f3a", 00:09:04.328 "aliases": [ 00:09:04.328 "lvs/lvol" 00:09:04.328 ], 00:09:04.328 "product_name": "Logical Volume", 00:09:04.328 "block_size": 4096, 00:09:04.328 "num_blocks": 38912, 00:09:04.328 "uuid": "c8943f8a-04b9-4f3b-8e20-5a5d87629f3a", 00:09:04.328 "assigned_rate_limits": { 00:09:04.328 "rw_ios_per_sec": 0, 00:09:04.328 "rw_mbytes_per_sec": 0, 00:09:04.328 "r_mbytes_per_sec": 0, 00:09:04.328 "w_mbytes_per_sec": 0 00:09:04.328 }, 00:09:04.328 "claimed": false, 00:09:04.328 "zoned": false, 00:09:04.328 "supported_io_types": { 00:09:04.328 "read": true, 00:09:04.328 "write": true, 00:09:04.328 "unmap": true, 00:09:04.328 "flush": false, 00:09:04.328 "reset": true, 00:09:04.328 "nvme_admin": false, 00:09:04.328 "nvme_io": false, 00:09:04.328 "nvme_io_md": false, 00:09:04.328 "write_zeroes": true, 00:09:04.328 "zcopy": false, 00:09:04.328 "get_zone_info": false, 00:09:04.328 "zone_management": false, 00:09:04.328 "zone_append": false, 00:09:04.328 "compare": false, 00:09:04.328 "compare_and_write": false, 00:09:04.328 "abort": false, 00:09:04.328 "seek_hole": true, 00:09:04.328 "seek_data": true, 00:09:04.328 "copy": false, 00:09:04.328 "nvme_iov_md": false 00:09:04.328 }, 00:09:04.328 "driver_specific": { 00:09:04.328 "lvol": { 00:09:04.328 "lvol_store_uuid": "0c34eee2-3190-4814-a6e1-1370d27abe45", 00:09:04.328 "base_bdev": "aio_bdev", 00:09:04.328 "thin_provision": false, 00:09:04.328 "num_allocated_clusters": 38, 00:09:04.328 "snapshot": false, 00:09:04.328 "clone": false, 00:09:04.328 "esnap_clone": false 00:09:04.328 } 00:09:04.328 } 00:09:04.328 } 00:09:04.328 ] 00:09:04.328 10:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:09:04.328 10:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c34eee2-3190-4814-a6e1-1370d27abe45 00:09:04.328 10:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:04.587 10:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:04.587 10:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c34eee2-3190-4814-a6e1-1370d27abe45 00:09:04.587 10:32:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:04.845 10:32:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:04.845 10:32:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c8943f8a-04b9-4f3b-8e20-5a5d87629f3a 00:09:05.103 10:32:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0c34eee2-3190-4814-a6e1-1370d27abe45 00:09:05.361 10:32:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:06.001 10:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:06.259 ************************************ 00:09:06.259 END TEST lvs_grow_clean 00:09:06.259 ************************************ 00:09:06.259 00:09:06.259 real 0m19.271s 00:09:06.259 user 0m18.198s 00:09:06.259 sys 0m2.662s 00:09:06.259 10:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:06.259 10:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:06.259 10:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:06.259 10:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:06.259 10:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:06.259 10:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:06.259 ************************************ 00:09:06.259 START TEST lvs_grow_dirty 00:09:06.259 ************************************ 00:09:06.259 10:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:09:06.259 10:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:06.259 10:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:06.259 10:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:06.259 10:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:06.259 10:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:06.259 10:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:06.259 10:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:06.259 10:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:06.259 10:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:06.517 10:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:06.517 10:32:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:07.083 10:32:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0bdb03ac-87ec-4369-b07c-01a1582b0a02 00:09:07.083 10:32:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bdb03ac-87ec-4369-b07c-01a1582b0a02 00:09:07.083 10:32:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:07.341 10:32:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:07.341 10:32:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:07.341 10:32:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0bdb03ac-87ec-4369-b07c-01a1582b0a02 lvol 150 00:09:07.600 10:32:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=40be1b20-7c71-4c8b-a72b-0288b4903873 00:09:07.600 10:32:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:07.600 10:32:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:07.860 [2024-11-15 10:32:33.196602] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:07.860 [2024-11-15 10:32:33.196695] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:07.860 true 00:09:07.860 10:32:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bdb03ac-87ec-4369-b07c-01a1582b0a02 00:09:07.860 10:32:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:08.118 10:32:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:08.118 10:32:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:08.377 10:32:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 40be1b20-7c71-4c8b-a72b-0288b4903873 00:09:08.635 10:32:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:08.894 [2024-11-15 10:32:34.381339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:09.152 10:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:09.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:09.505 10:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63675 00:09:09.505 10:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:09.505 10:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:09.505 10:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63675 /var/tmp/bdevperf.sock 00:09:09.505 10:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 63675 ']' 00:09:09.505 10:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:09.505 10:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:09.505 10:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:09.505 10:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:09.505 10:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:09.505 [2024-11-15 10:32:34.781508] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
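The dirty variant repeats the clean flow up to here: the backing file was already doubled to 400M and rescanned (51200 -> 102400 blocks), and bdevperf is starting up for its 10-second randwrite workload. The step that runs concurrently with that workload is the lvstore grow itself; a minimal sketch using this run's lvstore UUID, with a jq filter mirroring the script's own check:

    # claim the clusters added by the earlier truncate + bdev_aio_rescan,
    # while I/O to the exported lvol namespace is still in flight
    scripts/rpc.py bdev_lvol_grow_lvstore -u 0bdb03ac-87ec-4369-b07c-01a1582b0a02
    scripts/rpc.py bdev_lvol_get_lvstores -u 0bdb03ac-87ec-4369-b07c-01a1582b0a02 \
        | jq -r '.[0].total_data_clusters'   # expected to grow from 49 to 99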
00:09:09.505 [2024-11-15 10:32:34.781961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63675 ] 00:09:09.505 [2024-11-15 10:32:34.928228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.786 [2024-11-15 10:32:35.021096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.786 [2024-11-15 10:32:35.079033] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:10.354 10:32:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:10.354 10:32:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:09:10.354 10:32:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:10.921 Nvme0n1 00:09:10.921 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:10.921 [ 00:09:10.921 { 00:09:10.921 "name": "Nvme0n1", 00:09:10.921 "aliases": [ 00:09:10.921 "40be1b20-7c71-4c8b-a72b-0288b4903873" 00:09:10.921 ], 00:09:10.921 "product_name": "NVMe disk", 00:09:10.921 "block_size": 4096, 00:09:10.921 "num_blocks": 38912, 00:09:10.921 "uuid": "40be1b20-7c71-4c8b-a72b-0288b4903873", 00:09:10.921 "numa_id": -1, 00:09:10.921 "assigned_rate_limits": { 00:09:10.921 "rw_ios_per_sec": 0, 00:09:10.921 "rw_mbytes_per_sec": 0, 00:09:10.921 "r_mbytes_per_sec": 0, 00:09:10.921 "w_mbytes_per_sec": 0 00:09:10.921 }, 00:09:10.921 "claimed": false, 00:09:10.921 "zoned": false, 00:09:10.921 "supported_io_types": { 00:09:10.921 "read": true, 00:09:10.921 "write": true, 00:09:10.921 "unmap": true, 00:09:10.921 "flush": true, 00:09:10.921 "reset": true, 00:09:10.921 "nvme_admin": true, 00:09:10.921 "nvme_io": true, 00:09:10.921 "nvme_io_md": false, 00:09:10.921 "write_zeroes": true, 00:09:10.921 "zcopy": false, 00:09:10.921 "get_zone_info": false, 00:09:10.921 "zone_management": false, 00:09:10.921 "zone_append": false, 00:09:10.921 "compare": true, 00:09:10.921 "compare_and_write": true, 00:09:10.921 "abort": true, 00:09:10.921 "seek_hole": false, 00:09:10.921 "seek_data": false, 00:09:10.921 "copy": true, 00:09:10.921 "nvme_iov_md": false 00:09:10.921 }, 00:09:10.921 "memory_domains": [ 00:09:10.921 { 00:09:10.921 "dma_device_id": "system", 00:09:10.921 "dma_device_type": 1 00:09:10.921 } 00:09:10.921 ], 00:09:10.921 "driver_specific": { 00:09:10.921 "nvme": [ 00:09:10.921 { 00:09:10.921 "trid": { 00:09:10.921 "trtype": "TCP", 00:09:10.921 "adrfam": "IPv4", 00:09:10.921 "traddr": "10.0.0.3", 00:09:10.921 "trsvcid": "4420", 00:09:10.921 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:10.921 }, 00:09:10.921 "ctrlr_data": { 00:09:10.921 "cntlid": 1, 00:09:10.921 "vendor_id": "0x8086", 00:09:10.921 "model_number": "SPDK bdev Controller", 00:09:10.921 "serial_number": "SPDK0", 00:09:10.921 "firmware_revision": "25.01", 00:09:10.921 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:10.921 "oacs": { 00:09:10.921 "security": 0, 00:09:10.921 "format": 0, 00:09:10.921 "firmware": 0, 
00:09:10.921 "ns_manage": 0 00:09:10.921 }, 00:09:10.921 "multi_ctrlr": true, 00:09:10.921 "ana_reporting": false 00:09:10.921 }, 00:09:10.921 "vs": { 00:09:10.921 "nvme_version": "1.3" 00:09:10.921 }, 00:09:10.921 "ns_data": { 00:09:10.921 "id": 1, 00:09:10.921 "can_share": true 00:09:10.921 } 00:09:10.921 } 00:09:10.921 ], 00:09:10.921 "mp_policy": "active_passive" 00:09:10.921 } 00:09:10.921 } 00:09:10.921 ] 00:09:10.921 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:10.921 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63704 00:09:10.921 10:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:11.242 Running I/O for 10 seconds... 00:09:12.178 Latency(us) 00:09:12.178 [2024-11-15T10:32:37.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.178 Nvme0n1 : 1.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:09:12.178 [2024-11-15T10:32:37.676Z] =================================================================================================================== 00:09:12.178 [2024-11-15T10:32:37.676Z] Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:09:12.178 00:09:13.111 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0bdb03ac-87ec-4369-b07c-01a1582b0a02 00:09:13.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.111 Nvme0n1 : 2.00 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:09:13.111 [2024-11-15T10:32:38.609Z] =================================================================================================================== 00:09:13.111 [2024-11-15T10:32:38.609Z] Total : 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:09:13.111 00:09:13.369 true 00:09:13.369 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bdb03ac-87ec-4369-b07c-01a1582b0a02 00:09:13.369 10:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:13.628 10:32:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:13.629 10:32:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:13.629 10:32:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63704 00:09:14.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.195 Nvme0n1 : 3.00 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:09:14.195 [2024-11-15T10:32:39.693Z] =================================================================================================================== 00:09:14.195 [2024-11-15T10:32:39.693Z] Total : 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:09:14.195 00:09:15.130 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.130 Nvme0n1 : 4.00 7196.75 28.11 0.00 0.00 0.00 0.00 0.00 00:09:15.130 [2024-11-15T10:32:40.628Z] 
=================================================================================================================== 00:09:15.130 [2024-11-15T10:32:40.628Z] Total : 7196.75 28.11 0.00 0.00 0.00 0.00 0.00 00:09:15.130 00:09:16.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.505 Nvme0n1 : 5.00 6964.80 27.21 0.00 0.00 0.00 0.00 0.00 00:09:16.505 [2024-11-15T10:32:42.003Z] =================================================================================================================== 00:09:16.505 [2024-11-15T10:32:42.003Z] Total : 6964.80 27.21 0.00 0.00 0.00 0.00 0.00 00:09:16.505 00:09:17.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.074 Nvme0n1 : 6.00 6947.00 27.14 0.00 0.00 0.00 0.00 0.00 00:09:17.074 [2024-11-15T10:32:42.572Z] =================================================================================================================== 00:09:17.074 [2024-11-15T10:32:42.572Z] Total : 6947.00 27.14 0.00 0.00 0.00 0.00 0.00 00:09:17.074 00:09:18.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.450 Nvme0n1 : 7.00 6898.00 26.95 0.00 0.00 0.00 0.00 0.00 00:09:18.450 [2024-11-15T10:32:43.948Z] =================================================================================================================== 00:09:18.450 [2024-11-15T10:32:43.948Z] Total : 6898.00 26.95 0.00 0.00 0.00 0.00 0.00 00:09:18.450 00:09:19.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.383 Nvme0n1 : 8.00 6861.25 26.80 0.00 0.00 0.00 0.00 0.00 00:09:19.383 [2024-11-15T10:32:44.881Z] =================================================================================================================== 00:09:19.383 [2024-11-15T10:32:44.881Z] Total : 6861.25 26.80 0.00 0.00 0.00 0.00 0.00 00:09:19.383 00:09:20.320 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.320 Nvme0n1 : 9.00 6846.78 26.75 0.00 0.00 0.00 0.00 0.00 00:09:20.320 [2024-11-15T10:32:45.818Z] =================================================================================================================== 00:09:20.320 [2024-11-15T10:32:45.818Z] Total : 6846.78 26.75 0.00 0.00 0.00 0.00 0.00 00:09:20.320 00:09:21.255 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.255 Nvme0n1 : 10.00 6835.20 26.70 0.00 0.00 0.00 0.00 0.00 00:09:21.255 [2024-11-15T10:32:46.753Z] =================================================================================================================== 00:09:21.255 [2024-11-15T10:32:46.753Z] Total : 6835.20 26.70 0.00 0.00 0.00 0.00 0.00 00:09:21.255 00:09:21.255 00:09:21.255 Latency(us) 00:09:21.255 [2024-11-15T10:32:46.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.255 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.255 Nvme0n1 : 10.01 6840.52 26.72 0.00 0.00 18705.25 6970.65 147753.89 00:09:21.255 [2024-11-15T10:32:46.753Z] =================================================================================================================== 00:09:21.255 [2024-11-15T10:32:46.753Z] Total : 6840.52 26.72 0.00 0.00 18705.25 6970.65 147753.89 00:09:21.255 { 00:09:21.255 "results": [ 00:09:21.255 { 00:09:21.255 "job": "Nvme0n1", 00:09:21.255 "core_mask": "0x2", 00:09:21.255 "workload": "randwrite", 00:09:21.255 "status": "finished", 00:09:21.255 "queue_depth": 128, 00:09:21.255 "io_size": 4096, 00:09:21.255 "runtime": 
10.010935, 00:09:21.255 "iops": 6840.519891498646, 00:09:21.255 "mibps": 26.720780826166585, 00:09:21.255 "io_failed": 0, 00:09:21.255 "io_timeout": 0, 00:09:21.255 "avg_latency_us": 18705.248570943077, 00:09:21.255 "min_latency_us": 6970.647272727273, 00:09:21.255 "max_latency_us": 147753.8909090909 00:09:21.255 } 00:09:21.255 ], 00:09:21.255 "core_count": 1 00:09:21.255 } 00:09:21.255 10:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63675 00:09:21.255 10:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 63675 ']' 00:09:21.255 10:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 63675 00:09:21.255 10:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:09:21.255 10:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:21.255 10:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63675 00:09:21.255 killing process with pid 63675 00:09:21.255 Received shutdown signal, test time was about 10.000000 seconds 00:09:21.255 00:09:21.255 Latency(us) 00:09:21.255 [2024-11-15T10:32:46.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.255 [2024-11-15T10:32:46.753Z] =================================================================================================================== 00:09:21.255 [2024-11-15T10:32:46.753Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:21.255 10:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:21.255 10:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:21.255 10:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63675' 00:09:21.255 10:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 63675 00:09:21.255 10:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 63675 00:09:21.514 10:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:21.773 10:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:22.032 10:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bdb03ac-87ec-4369-b07c-01a1582b0a02 00:09:22.032 10:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:22.291 10:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:22.291 10:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:22.291 10:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63305 
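With the workload finished and free_clusters confirmed at 61, the dirty variant diverges from the clean one: instead of tearing the lvstore down, it SIGKILLs the main nvmf target (pid 63305 in this run) so the grown lvstore on the AIO file is never cleanly unloaded. A sketch of the shutdown-and-restart step, assuming the script's nvmfpid variable holds the target's pid and shortening the binary path to repo-relative form:

    kill -9 "$nvmfpid"   # leave the lvstore superblock dirty on disk
    wait "$nvmfpid"      # reap the process; the shell reports it Killed
    # restart the target in the test namespace, as the log shows next
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &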
00:09:22.291 10:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63305 00:09:22.291 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63305 Killed "${NVMF_APP[@]}" "$@" 00:09:22.291 10:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:22.291 10:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:22.291 10:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:22.291 10:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:22.291 10:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:22.291 10:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63837 00:09:22.291 10:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:22.291 10:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63837 00:09:22.291 10:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 63837 ']' 00:09:22.291 10:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.291 10:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:22.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.291 10:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.291 10:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:22.291 10:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:22.631 [2024-11-15 10:32:47.794711] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:09:22.631 [2024-11-15 10:32:47.794826] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.631 [2024-11-15 10:32:47.944477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.631 [2024-11-15 10:32:48.001801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.631 [2024-11-15 10:32:48.001859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.631 [2024-11-15 10:32:48.001871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.631 [2024-11-15 10:32:48.001880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.631 [2024-11-15 10:32:48.001887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
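When the backing file is re-registered with the freshly started target, the lvol layer examines it, finds an lvstore that was never cleanly shut down, and runs blobstore recovery — the "Performing recovery on blobstore" and "Recover: blob 0x0 / 0x1" notices just below. A sketch of the re-attach and the geometry check the recovered store must pass, assuming the same file, bdev name, and UUID as this run:

    scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    # recovery must preserve the post-grow geometry: 99 total clusters, 61 free
    scripts/rpc.py bdev_lvol_get_lvstores -u 0bdb03ac-87ec-4369-b07c-01a1582b0a02 \
        | jq -r '.[0].total_data_clusters, .[0].free_clusters'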
00:09:22.631 [2024-11-15 10:32:48.002276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.631 [2024-11-15 10:32:48.057195] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:22.901 10:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:22.901 10:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:09:22.901 10:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:22.901 10:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:22.901 10:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:22.901 10:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.901 10:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:23.160 [2024-11-15 10:32:48.418350] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:23.160 [2024-11-15 10:32:48.418650] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:23.160 [2024-11-15 10:32:48.418823] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:23.160 10:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:23.160 10:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 40be1b20-7c71-4c8b-a72b-0288b4903873 00:09:23.160 10:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=40be1b20-7c71-4c8b-a72b-0288b4903873 00:09:23.160 10:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:23.160 10:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:09:23.160 10:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:23.160 10:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:23.160 10:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:23.420 10:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 40be1b20-7c71-4c8b-a72b-0288b4903873 -t 2000 00:09:23.679 [ 00:09:23.679 { 00:09:23.679 "name": "40be1b20-7c71-4c8b-a72b-0288b4903873", 00:09:23.679 "aliases": [ 00:09:23.679 "lvs/lvol" 00:09:23.679 ], 00:09:23.679 "product_name": "Logical Volume", 00:09:23.679 "block_size": 4096, 00:09:23.679 "num_blocks": 38912, 00:09:23.679 "uuid": "40be1b20-7c71-4c8b-a72b-0288b4903873", 00:09:23.679 "assigned_rate_limits": { 00:09:23.679 "rw_ios_per_sec": 0, 00:09:23.679 "rw_mbytes_per_sec": 0, 00:09:23.679 "r_mbytes_per_sec": 0, 00:09:23.679 "w_mbytes_per_sec": 0 00:09:23.679 }, 00:09:23.679 
"claimed": false, 00:09:23.679 "zoned": false, 00:09:23.679 "supported_io_types": { 00:09:23.679 "read": true, 00:09:23.679 "write": true, 00:09:23.679 "unmap": true, 00:09:23.679 "flush": false, 00:09:23.679 "reset": true, 00:09:23.679 "nvme_admin": false, 00:09:23.679 "nvme_io": false, 00:09:23.679 "nvme_io_md": false, 00:09:23.679 "write_zeroes": true, 00:09:23.679 "zcopy": false, 00:09:23.679 "get_zone_info": false, 00:09:23.679 "zone_management": false, 00:09:23.679 "zone_append": false, 00:09:23.679 "compare": false, 00:09:23.679 "compare_and_write": false, 00:09:23.679 "abort": false, 00:09:23.679 "seek_hole": true, 00:09:23.679 "seek_data": true, 00:09:23.679 "copy": false, 00:09:23.679 "nvme_iov_md": false 00:09:23.679 }, 00:09:23.679 "driver_specific": { 00:09:23.679 "lvol": { 00:09:23.679 "lvol_store_uuid": "0bdb03ac-87ec-4369-b07c-01a1582b0a02", 00:09:23.679 "base_bdev": "aio_bdev", 00:09:23.679 "thin_provision": false, 00:09:23.679 "num_allocated_clusters": 38, 00:09:23.679 "snapshot": false, 00:09:23.679 "clone": false, 00:09:23.679 "esnap_clone": false 00:09:23.679 } 00:09:23.679 } 00:09:23.679 } 00:09:23.679 ] 00:09:23.679 10:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:09:23.679 10:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bdb03ac-87ec-4369-b07c-01a1582b0a02 00:09:23.679 10:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:23.938 10:32:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:23.939 10:32:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bdb03ac-87ec-4369-b07c-01a1582b0a02 00:09:23.939 10:32:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:24.198 10:32:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:24.198 10:32:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:24.457 [2024-11-15 10:32:49.775839] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:24.457 10:32:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bdb03ac-87ec-4369-b07c-01a1582b0a02 00:09:24.457 10:32:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:24.457 10:32:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bdb03ac-87ec-4369-b07c-01a1582b0a02 00:09:24.457 10:32:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:24.457 10:32:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:24.457 10:32:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:24.457 10:32:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:24.457 10:32:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:24.457 10:32:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:24.457 10:32:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:24.457 10:32:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:24.457 10:32:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bdb03ac-87ec-4369-b07c-01a1582b0a02 00:09:24.716 request: 00:09:24.716 { 00:09:24.716 "uuid": "0bdb03ac-87ec-4369-b07c-01a1582b0a02", 00:09:24.716 "method": "bdev_lvol_get_lvstores", 00:09:24.716 "req_id": 1 00:09:24.716 } 00:09:24.716 Got JSON-RPC error response 00:09:24.716 response: 00:09:24.716 { 00:09:24.716 "code": -19, 00:09:24.716 "message": "No such device" 00:09:24.716 } 00:09:24.716 10:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:24.716 10:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:24.716 10:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:24.716 10:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:24.716 10:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:24.975 aio_bdev 00:09:24.975 10:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 40be1b20-7c71-4c8b-a72b-0288b4903873 00:09:24.975 10:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=40be1b20-7c71-4c8b-a72b-0288b4903873 00:09:24.975 10:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:24.975 10:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:09:24.975 10:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:24.975 10:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:24.975 10:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:25.233 10:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 40be1b20-7c71-4c8b-a72b-0288b4903873 -t 2000 00:09:25.492 [ 00:09:25.492 { 
00:09:25.492 "name": "40be1b20-7c71-4c8b-a72b-0288b4903873", 00:09:25.493 "aliases": [ 00:09:25.493 "lvs/lvol" 00:09:25.493 ], 00:09:25.493 "product_name": "Logical Volume", 00:09:25.493 "block_size": 4096, 00:09:25.493 "num_blocks": 38912, 00:09:25.493 "uuid": "40be1b20-7c71-4c8b-a72b-0288b4903873", 00:09:25.493 "assigned_rate_limits": { 00:09:25.493 "rw_ios_per_sec": 0, 00:09:25.493 "rw_mbytes_per_sec": 0, 00:09:25.493 "r_mbytes_per_sec": 0, 00:09:25.493 "w_mbytes_per_sec": 0 00:09:25.493 }, 00:09:25.493 "claimed": false, 00:09:25.493 "zoned": false, 00:09:25.493 "supported_io_types": { 00:09:25.493 "read": true, 00:09:25.493 "write": true, 00:09:25.493 "unmap": true, 00:09:25.493 "flush": false, 00:09:25.493 "reset": true, 00:09:25.493 "nvme_admin": false, 00:09:25.493 "nvme_io": false, 00:09:25.493 "nvme_io_md": false, 00:09:25.493 "write_zeroes": true, 00:09:25.493 "zcopy": false, 00:09:25.493 "get_zone_info": false, 00:09:25.493 "zone_management": false, 00:09:25.493 "zone_append": false, 00:09:25.493 "compare": false, 00:09:25.493 "compare_and_write": false, 00:09:25.493 "abort": false, 00:09:25.493 "seek_hole": true, 00:09:25.493 "seek_data": true, 00:09:25.493 "copy": false, 00:09:25.493 "nvme_iov_md": false 00:09:25.493 }, 00:09:25.493 "driver_specific": { 00:09:25.493 "lvol": { 00:09:25.493 "lvol_store_uuid": "0bdb03ac-87ec-4369-b07c-01a1582b0a02", 00:09:25.493 "base_bdev": "aio_bdev", 00:09:25.493 "thin_provision": false, 00:09:25.493 "num_allocated_clusters": 38, 00:09:25.493 "snapshot": false, 00:09:25.493 "clone": false, 00:09:25.493 "esnap_clone": false 00:09:25.493 } 00:09:25.493 } 00:09:25.493 } 00:09:25.493 ] 00:09:25.493 10:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:09:25.493 10:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bdb03ac-87ec-4369-b07c-01a1582b0a02 00:09:25.493 10:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:25.751 10:32:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:25.751 10:32:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bdb03ac-87ec-4369-b07c-01a1582b0a02 00:09:25.751 10:32:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:26.009 10:32:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:26.009 10:32:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 40be1b20-7c71-4c8b-a72b-0288b4903873 00:09:26.267 10:32:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0bdb03ac-87ec-4369-b07c-01a1582b0a02 00:09:26.525 10:32:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:26.783 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:27.350 ************************************ 00:09:27.350 END TEST lvs_grow_dirty 00:09:27.350 ************************************ 00:09:27.350 00:09:27.350 real 0m20.981s 00:09:27.350 user 0m46.041s 00:09:27.350 sys 0m7.742s 00:09:27.350 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:27.350 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:27.350 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:27.350 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:09:27.350 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:09:27.350 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:09:27.350 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:27.350 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:09:27.350 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:09:27.350 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:09:27.350 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:27.350 nvmf_trace.0 00:09:27.350 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:09:27.350 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:27.350 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:27.350 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:27.350 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:27.350 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:27.351 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:27.351 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:27.351 rmmod nvme_tcp 00:09:27.610 rmmod nvme_fabrics 00:09:27.610 rmmod nvme_keyring 00:09:27.610 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:27.610 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:27.610 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:27.610 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63837 ']' 00:09:27.610 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63837 00:09:27.610 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 63837 ']' 00:09:27.610 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 63837 00:09:27.610 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:09:27.610 10:32:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:27.610 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63837 00:09:27.610 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:27.610 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:27.610 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63837' 00:09:27.610 killing process with pid 63837 00:09:27.610 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 63837 00:09:27.610 10:32:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 63837 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.869 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:09:28.129 00:09:28.129 real 0m43.127s 00:09:28.129 user 1m10.401s 00:09:28.129 sys 0m11.216s 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:28.129 ************************************ 00:09:28.129 END TEST nvmf_lvs_grow 00:09:28.129 ************************************ 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:28.129 ************************************ 00:09:28.129 START TEST nvmf_bdev_io_wait 00:09:28.129 ************************************ 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:28.129 * Looking for test storage... 
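For orientation: the lvs_grow_dirty case that just wrapped up is a dirty-recovery check. The backing AIO bdev is deleted out from under a live lvstore, the lvstore lookup must then fail with JSON-RPC error -19 (No such device), and re-creating the AIO bdev must bring the lvstore and its lvol back via blobstore recovery (the "Performing recovery on blobstore" notices earlier in the log mark that replay). A condensed sketch of the sequence, not the literal test script, using the same rpc.py calls visible above, with $LVS_UUID standing in for the lvstore UUID 0bdb03ac-87ec-4369-b07c-01a1582b0a02:

  rpc.py bdev_aio_delete aio_bdev                          # drop the backing device under a live lvstore
  rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" && exit 1   # must fail: -19, "No such device"
  rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  rpc.py bdev_wait_for_examine                             # let the lvol module replay the on-disk metadata
  rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID"             # lvstore is back: 61 free / 99 total clusters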
00:09:28.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.129 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:28.388 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.388 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.388 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.388 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:28.388 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.388 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:28.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.388 --rc genhtml_branch_coverage=1 00:09:28.388 --rc genhtml_function_coverage=1 00:09:28.388 --rc genhtml_legend=1 00:09:28.388 --rc geninfo_all_blocks=1 00:09:28.388 --rc geninfo_unexecuted_blocks=1 00:09:28.388 00:09:28.388 ' 00:09:28.388 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:28.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.388 --rc genhtml_branch_coverage=1 00:09:28.388 --rc genhtml_function_coverage=1 00:09:28.388 --rc genhtml_legend=1 00:09:28.388 --rc geninfo_all_blocks=1 00:09:28.388 --rc geninfo_unexecuted_blocks=1 00:09:28.388 00:09:28.388 ' 00:09:28.388 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:28.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.388 --rc genhtml_branch_coverage=1 00:09:28.388 --rc genhtml_function_coverage=1 00:09:28.388 --rc genhtml_legend=1 00:09:28.388 --rc geninfo_all_blocks=1 00:09:28.388 --rc geninfo_unexecuted_blocks=1 00:09:28.388 00:09:28.388 ' 00:09:28.388 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:28.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.389 --rc genhtml_branch_coverage=1 00:09:28.389 --rc genhtml_function_coverage=1 00:09:28.389 --rc genhtml_legend=1 00:09:28.389 --rc geninfo_all_blocks=1 00:09:28.389 --rc geninfo_unexecuted_blocks=1 00:09:28.389 00:09:28.389 ' 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:28.389 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
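nvmftestinit, invoked just below, tears down any stale interfaces (the "Cannot find device" probes that follow) and rebuilds the virtual test network: a namespace for the target, a veth pair per endpoint, and a bridge tying the host-side ends together, with 10.0.0.1/.2 on the initiator side and 10.0.0.3/.4 inside the namespace. A condensed sketch of the commands that appear next in the log (one pair per side shown, the test creates two; link-up steps omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                     # host ends joined on the bridge
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port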
00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:28.389 
10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:28.389 Cannot find device "nvmf_init_br" 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:28.389 Cannot find device "nvmf_init_br2" 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:28.389 Cannot find device "nvmf_tgt_br" 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:28.389 Cannot find device "nvmf_tgt_br2" 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:28.389 Cannot find device "nvmf_init_br" 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:28.389 Cannot find device "nvmf_init_br2" 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:09:28.389 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:28.389 Cannot find device "nvmf_tgt_br" 00:09:28.390 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:09:28.390 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:28.390 Cannot find device "nvmf_tgt_br2" 00:09:28.390 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:09:28.390 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:28.390 Cannot find device "nvmf_br" 00:09:28.390 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:09:28.390 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:28.390 Cannot find device "nvmf_init_if" 00:09:28.390 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:09:28.390 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:28.390 Cannot find device "nvmf_init_if2" 00:09:28.390 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:09:28.390 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:28.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:28.390 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:09:28.390 
10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:28.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:28.390 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:09:28.390 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:28.390 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:28.390 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:28.390 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:28.390 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:28.390 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:28.390 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:28.390 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:28.390 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:28.390 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:28.649 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:28.649 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:28.649 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:28.649 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:28.649 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:28.649 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:28.649 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:28.649 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:28.649 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:28.649 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:28.649 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:28.649 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:28.649 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:28.649 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:28.649 10:32:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:28.649 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:28.649 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms 00:09:28.649 00:09:28.649 --- 10.0.0.3 ping statistics --- 00:09:28.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.649 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:28.649 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:28.649 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:09:28.649 00:09:28.649 --- 10.0.0.4 ping statistics --- 00:09:28.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.649 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:28.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:28.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:09:28.649 00:09:28.649 --- 10.0.0.1 ping statistics --- 00:09:28.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.649 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:28.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:28.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:09:28.649 00:09:28.649 --- 10.0.0.2 ping statistics --- 00:09:28.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.649 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64201 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64201 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 64201 ']' 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:28.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:28.649 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:28.908 [2024-11-15 10:32:54.154406] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:09:28.908 [2024-11-15 10:32:54.154539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.908 [2024-11-15 10:32:54.305905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:28.908 [2024-11-15 10:32:54.375436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.908 [2024-11-15 10:32:54.375498] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:28.908 [2024-11-15 10:32:54.375535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.908 [2024-11-15 10:32:54.375546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:28.908 [2024-11-15 10:32:54.375552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:28.908 [2024-11-15 10:32:54.376719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.908 [2024-11-15 10:32:54.376842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.908 [2024-11-15 10:32:54.376969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.908 [2024-11-15 10:32:54.376970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:29.166 [2024-11-15 10:32:54.550071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:29.166 [2024-11-15 10:32:54.566464] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:29.166 Malloc0 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:29.166 [2024-11-15 10:32:54.627169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64234 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64236 00:09:29.166 10:32:54 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:29.166 { 00:09:29.166 "params": { 00:09:29.166 "name": "Nvme$subsystem", 00:09:29.166 "trtype": "$TEST_TRANSPORT", 00:09:29.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:29.166 "adrfam": "ipv4", 00:09:29.166 "trsvcid": "$NVMF_PORT", 00:09:29.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:29.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:29.166 "hdgst": ${hdgst:-false}, 00:09:29.166 "ddgst": ${ddgst:-false} 00:09:29.166 }, 00:09:29.166 "method": "bdev_nvme_attach_controller" 00:09:29.166 } 00:09:29.166 EOF 00:09:29.166 )") 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64238 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:29.166 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:29.167 { 00:09:29.167 "params": { 00:09:29.167 "name": "Nvme$subsystem", 00:09:29.167 "trtype": "$TEST_TRANSPORT", 00:09:29.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:29.167 "adrfam": "ipv4", 00:09:29.167 "trsvcid": "$NVMF_PORT", 00:09:29.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:29.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:29.167 "hdgst": ${hdgst:-false}, 00:09:29.167 "ddgst": ${ddgst:-false} 00:09:29.167 }, 00:09:29.167 "method": "bdev_nvme_attach_controller" 00:09:29.167 } 00:09:29.167 EOF 00:09:29.167 )") 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64241 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:29.167 { 00:09:29.167 "params": { 00:09:29.167 "name": "Nvme$subsystem", 00:09:29.167 "trtype": "$TEST_TRANSPORT", 00:09:29.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:29.167 "adrfam": "ipv4", 00:09:29.167 "trsvcid": 
"$NVMF_PORT", 00:09:29.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:29.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:29.167 "hdgst": ${hdgst:-false}, 00:09:29.167 "ddgst": ${ddgst:-false} 00:09:29.167 }, 00:09:29.167 "method": "bdev_nvme_attach_controller" 00:09:29.167 } 00:09:29.167 EOF 00:09:29.167 )") 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:29.167 "params": { 00:09:29.167 "name": "Nvme1", 00:09:29.167 "trtype": "tcp", 00:09:29.167 "traddr": "10.0.0.3", 00:09:29.167 "adrfam": "ipv4", 00:09:29.167 "trsvcid": "4420", 00:09:29.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:29.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:29.167 "hdgst": false, 00:09:29.167 "ddgst": false 00:09:29.167 }, 00:09:29.167 "method": "bdev_nvme_attach_controller" 00:09:29.167 }' 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:29.167 "params": { 00:09:29.167 "name": "Nvme1", 00:09:29.167 "trtype": "tcp", 00:09:29.167 "traddr": "10.0.0.3", 00:09:29.167 "adrfam": "ipv4", 00:09:29.167 "trsvcid": "4420", 00:09:29.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:29.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:29.167 "hdgst": false, 00:09:29.167 "ddgst": false 00:09:29.167 }, 00:09:29.167 "method": "bdev_nvme_attach_controller" 00:09:29.167 }' 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:29.167 { 00:09:29.167 "params": { 00:09:29.167 "name": "Nvme$subsystem", 00:09:29.167 "trtype": "$TEST_TRANSPORT", 00:09:29.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:29.167 "adrfam": "ipv4", 00:09:29.167 "trsvcid": "$NVMF_PORT", 00:09:29.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:29.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:29.167 "hdgst": ${hdgst:-false}, 00:09:29.167 "ddgst": ${ddgst:-false} 00:09:29.167 }, 00:09:29.167 "method": "bdev_nvme_attach_controller" 00:09:29.167 } 00:09:29.167 EOF 00:09:29.167 )") 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:29.167 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:29.167 "params": { 00:09:29.167 "name": "Nvme1", 00:09:29.167 "trtype": "tcp", 00:09:29.167 "traddr": "10.0.0.3", 00:09:29.167 "adrfam": "ipv4", 00:09:29.167 "trsvcid": "4420", 00:09:29.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:29.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:29.167 "hdgst": false, 00:09:29.167 "ddgst": false 00:09:29.167 }, 00:09:29.167 "method": "bdev_nvme_attach_controller" 00:09:29.167 }' 00:09:29.424 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:29.424 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:29.424 [2024-11-15 10:32:54.687873] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:09:29.424 [2024-11-15 10:32:54.687960] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:29.424 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:29.424 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:29.424 "params": { 00:09:29.424 "name": "Nvme1", 00:09:29.424 "trtype": "tcp", 00:09:29.424 "traddr": "10.0.0.3", 00:09:29.424 "adrfam": "ipv4", 00:09:29.424 "trsvcid": "4420", 00:09:29.424 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:29.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:29.424 "hdgst": false, 00:09:29.424 "ddgst": false 00:09:29.424 }, 00:09:29.424 "method": "bdev_nvme_attach_controller" 00:09:29.424 }' 00:09:29.424 10:32:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64234 00:09:29.424 [2024-11-15 10:32:54.707032] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:09:29.424 [2024-11-15 10:32:54.707122] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:29.424 [2024-11-15 10:32:54.724149] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:09:29.424 [2024-11-15 10:32:54.724247] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:29.424 [2024-11-15 10:32:54.726479] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:09:29.424 [2024-11-15 10:32:54.726573] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:29.424 [2024-11-15 10:32:54.908138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.682 [2024-11-15 10:32:54.964814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:29.682 [2024-11-15 10:32:54.978785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:29.682 [2024-11-15 10:32:54.980225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.682 [2024-11-15 10:32:55.036756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:29.682 [2024-11-15 10:32:55.050764] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:29.682 [2024-11-15 10:32:55.055498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.682 [2024-11-15 10:32:55.111841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:29.682 Running I/O for 1 seconds... 00:09:29.682 [2024-11-15 10:32:55.125871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:29.682 [2024-11-15 10:32:55.130942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.940 Running I/O for 1 seconds... 00:09:29.940 [2024-11-15 10:32:55.186799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:29.940 [2024-11-15 10:32:55.200781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:29.940 Running I/O for 1 seconds... 00:09:29.940 Running I/O for 1 seconds... 
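[Note] At this point all four bdevperf instances are up: the EAL banners above show each one pinned to its own core (reactors on cores 4 through 7) with a private file-prefix (spdk1 through spdk4), so their DPDK state stays separate. A condensed sketch of the launch pattern, assuming the bdevperf path from this VM and the gen_target_json_sketch helper from the earlier note:

# Disjoint core masks (-m) plus distinct shared-memory ids (-i) let four
# DPDK processes coexist; -s 256 gives each instance its own 256 MB memory
# pool (visible as "-m 256" in the EAL parameter lines above).
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
pids=()
i=1
for spec in "0x10 write" "0x20 read" "0x40 flush" "0x80 unmap"; do
    read -r mask workload <<< "$spec"
    "$BDEVPERF" -m "$mask" -i "$i" --json /dev/fd/63 \
        -q 128 -o 4096 -w "$workload" -t 1 -s 256 63< <(gen_target_json_sketch) &
    pids+=($!)
    ((i++))
done
wait "${pids[@]}"  # bdev_io_wait passes only if every workload drains cleanly

The per-workload latency tables that follow are each instance's summary as its one-second run (-t 1) completes; the harness then waits on all four PIDs before tearing the target down.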
00:09:30.952 9948.00 IOPS, 38.86 MiB/s
00:09:30.952 Latency(us)
00:09:30.952 [2024-11-15T10:32:56.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:30.952 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:09:30.952 Nvme1n1 : 1.01 10001.34 39.07 0.00 0.00 12740.84 4647.10 18469.24
00:09:30.952 [2024-11-15T10:32:56.450Z] ===================================================================================================================
00:09:30.952 [2024-11-15T10:32:56.450Z] Total : 10001.34 39.07 0.00 0.00 12740.84 4647.10 18469.24
00:09:30.952 7744.00 IOPS, 30.25 MiB/s
00:09:30.952 Latency(us)
00:09:30.953 [2024-11-15T10:32:56.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:30.953 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:09:30.953 Nvme1n1 : 1.01 7790.22 30.43 0.00 0.00 16336.36 9353.77 24903.68
00:09:30.953 [2024-11-15T10:32:56.451Z] ===================================================================================================================
00:09:30.953 [2024-11-15T10:32:56.451Z] Total : 7790.22 30.43 0.00 0.00 16336.36 9353.77 24903.68
00:09:30.953 8265.00 IOPS, 32.29 MiB/s
00:09:30.953 Latency(us)
00:09:30.953 [2024-11-15T10:32:56.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:30.953 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:09:30.953 Nvme1n1 : 1.01 8339.91 32.58 0.00 0.00 15278.04 7268.54 25141.99
00:09:30.953 [2024-11-15T10:32:56.451Z] ===================================================================================================================
00:09:30.953 [2024-11-15T10:32:56.451Z] Total : 8339.91 32.58 0.00 0.00 15278.04 7268.54 25141.99
00:09:30.953 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64236
00:09:30.953 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64238
00:09:30.953 172896.00 IOPS, 675.38 MiB/s
00:09:30.953 Latency(us)
00:09:30.953 [2024-11-15T10:32:56.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:30.953 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:09:30.953 Nvme1n1 : 1.00 172554.06 674.04 0.00 0.00 738.00 379.81 1966.08
00:09:30.953 [2024-11-15T10:32:56.451Z] ===================================================================================================================
00:09:30.953 [2024-11-15T10:32:56.451Z] Total : 172554.06 674.04 0.00 0.00 738.00 379.81 1966.08
00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64241
00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- #
nvmfcleanup 00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:31.211 rmmod nvme_tcp 00:09:31.211 rmmod nvme_fabrics 00:09:31.211 rmmod nvme_keyring 00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64201 ']' 00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64201 00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 64201 ']' 00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 64201 00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64201 00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64201' 00:09:31.211 killing process with pid 64201 00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 64201 00:09:31.211 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 64201 00:09:31.470 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:31.470 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:31.470 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:31.470 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:31.470 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:31.470 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:31.470 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:31.470 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:31.470 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:31.470 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:31.470 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:31.470 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:31.470 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:31.470 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:31.470 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:31.470 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:31.470 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:31.470 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:31.729 10:32:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:31.729 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:31.729 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:31.729 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:31.729 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:31.729 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.729 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.729 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.729 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:09:31.729 00:09:31.729 real 0m3.670s 00:09:31.729 user 0m14.263s 00:09:31.729 sys 0m2.295s 00:09:31.730 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:31.730 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:31.730 ************************************ 00:09:31.730 END TEST nvmf_bdev_io_wait 00:09:31.730 ************************************ 00:09:31.730 10:32:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:31.730 10:32:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:31.730 10:32:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:31.730 10:32:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:31.730 ************************************ 00:09:31.730 START TEST nvmf_queue_depth 00:09:31.730 ************************************ 00:09:31.730 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:31.990 * Looking for test storage... 
00:09:31.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:31.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.990 --rc genhtml_branch_coverage=1 00:09:31.990 --rc genhtml_function_coverage=1 00:09:31.990 --rc genhtml_legend=1 00:09:31.990 --rc geninfo_all_blocks=1 00:09:31.990 --rc geninfo_unexecuted_blocks=1 00:09:31.990 00:09:31.990 ' 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:31.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.990 --rc genhtml_branch_coverage=1 00:09:31.990 --rc genhtml_function_coverage=1 00:09:31.990 --rc genhtml_legend=1 00:09:31.990 --rc geninfo_all_blocks=1 00:09:31.990 --rc geninfo_unexecuted_blocks=1 00:09:31.990 00:09:31.990 ' 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:31.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.990 --rc genhtml_branch_coverage=1 00:09:31.990 --rc genhtml_function_coverage=1 00:09:31.990 --rc genhtml_legend=1 00:09:31.990 --rc geninfo_all_blocks=1 00:09:31.990 --rc geninfo_unexecuted_blocks=1 00:09:31.990 00:09:31.990 ' 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:31.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.990 --rc genhtml_branch_coverage=1 00:09:31.990 --rc genhtml_function_coverage=1 00:09:31.990 --rc genhtml_legend=1 00:09:31.990 --rc geninfo_all_blocks=1 00:09:31.990 --rc geninfo_unexecuted_blocks=1 00:09:31.990 00:09:31.990 ' 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.990 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:31.991 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:31.991 
10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:31.991 10:32:57 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:31.991 Cannot find device "nvmf_init_br" 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:31.991 Cannot find device "nvmf_init_br2" 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:31.991 Cannot find device "nvmf_tgt_br" 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:31.991 Cannot find device "nvmf_tgt_br2" 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:31.991 Cannot find device "nvmf_init_br" 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:31.991 Cannot find device "nvmf_init_br2" 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:31.991 Cannot find device "nvmf_tgt_br" 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:31.991 Cannot find device "nvmf_tgt_br2" 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:09:31.991 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:32.249 Cannot find device "nvmf_br" 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:32.249 Cannot find device "nvmf_init_if" 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:32.249 Cannot find device "nvmf_init_if2" 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:32.249 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:32.249 10:32:57 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:32.249 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:32.249 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:32.249 
10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:32.508 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:32.508 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:09:32.508 00:09:32.508 --- 10.0.0.3 ping statistics --- 00:09:32.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.508 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:32.508 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:32.508 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:09:32.508 00:09:32.508 --- 10.0.0.4 ping statistics --- 00:09:32.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.508 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:32.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:32.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:09:32.508 00:09:32.508 --- 10.0.0.1 ping statistics --- 00:09:32.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.508 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:32.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:32.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:09:32.508 00:09:32.508 --- 10.0.0.2 ping statistics --- 00:09:32.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.508 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64500 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64500 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 64500 ']' 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:32.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.508 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:32.509 10:32:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:32.509 [2024-11-15 10:32:57.887733] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:09:32.509 [2024-11-15 10:32:57.887854] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.767 [2024-11-15 10:32:58.045318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.767 [2024-11-15 10:32:58.115141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.767 [2024-11-15 10:32:58.115200] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.767 [2024-11-15 10:32:58.115214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.767 [2024-11-15 10:32:58.115225] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.767 [2024-11-15 10:32:58.115234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.767 [2024-11-15 10:32:58.115753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.767 [2024-11-15 10:32:58.176971] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:33.703 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:33.703 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:09:33.703 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:33.703 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:33.703 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.703 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.703 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:33.703 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.703 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.703 [2024-11-15 10:32:58.934630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.703 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.703 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:33.703 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.703 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.703 Malloc0 00:09:33.703 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.703 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:33.703 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.703 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:09:33.703 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.703 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:33.703 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.703 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.703 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.704 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:33.704 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.704 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.704 [2024-11-15 10:32:58.987606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:33.704 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.704 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64532 00:09:33.704 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:33.704 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:33.704 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64532 /var/tmp/bdevperf.sock 00:09:33.704 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 64532 ']' 00:09:33.704 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:33.704 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:33.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:33.704 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:33.704 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:33.704 10:32:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.704 [2024-11-15 10:32:59.056429] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:09:33.704 [2024-11-15 10:32:59.056583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64532 ] 00:09:33.963 [2024-11-15 10:32:59.209816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.963 [2024-11-15 10:32:59.275875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.963 [2024-11-15 10:32:59.332249] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:33.963 10:32:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:33.963 10:32:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:09:33.963 10:32:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:33.963 10:32:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.963 10:32:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.222 NVMe0n1 00:09:34.222 10:32:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.222 10:32:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:34.222 Running I/O for 10 seconds... 00:09:36.535 6151.00 IOPS, 24.03 MiB/s [2024-11-15T10:33:02.968Z] 6815.50 IOPS, 26.62 MiB/s [2024-11-15T10:33:03.904Z] 7231.00 IOPS, 28.25 MiB/s [2024-11-15T10:33:04.840Z] 7503.25 IOPS, 29.31 MiB/s [2024-11-15T10:33:05.775Z] 7641.40 IOPS, 29.85 MiB/s [2024-11-15T10:33:06.761Z] 7757.17 IOPS, 30.30 MiB/s [2024-11-15T10:33:07.695Z] 7855.29 IOPS, 30.68 MiB/s [2024-11-15T10:33:08.628Z] 7918.50 IOPS, 30.93 MiB/s [2024-11-15T10:33:10.003Z] 7976.56 IOPS, 31.16 MiB/s [2024-11-15T10:33:10.003Z] 8007.60 IOPS, 31.28 MiB/s 00:09:44.505 Latency(us) 00:09:44.505 [2024-11-15T10:33:10.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.505 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:44.505 Verification LBA range: start 0x0 length 0x4000 00:09:44.505 NVMe0n1 : 10.09 8037.93 31.40 0.00 0.00 126800.20 27405.96 100091.35 00:09:44.505 [2024-11-15T10:33:10.003Z] =================================================================================================================== 00:09:44.505 [2024-11-15T10:33:10.003Z] Total : 8037.93 31.40 0.00 0.00 126800.20 27405.96 100091.35 00:09:44.505 { 00:09:44.505 "results": [ 00:09:44.505 { 00:09:44.505 "job": "NVMe0n1", 00:09:44.505 "core_mask": "0x1", 00:09:44.505 "workload": "verify", 00:09:44.505 "status": "finished", 00:09:44.505 "verify_range": { 00:09:44.505 "start": 0, 00:09:44.505 "length": 16384 00:09:44.505 }, 00:09:44.505 "queue_depth": 1024, 00:09:44.505 "io_size": 4096, 00:09:44.505 "runtime": 10.0878, 00:09:44.505 "iops": 8037.927000931819, 00:09:44.505 "mibps": 31.398152347389917, 00:09:44.505 "io_failed": 0, 00:09:44.505 "io_timeout": 0, 00:09:44.505 "avg_latency_us": 126800.2013800557, 00:09:44.505 "min_latency_us": 27405.963636363635, 00:09:44.505 "max_latency_us": 100091.34545454546 
00:09:44.505 } 00:09:44.505 ], 00:09:44.505 "core_count": 1 00:09:44.505 } 00:09:44.505 10:33:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64532 00:09:44.505 10:33:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 64532 ']' 00:09:44.505 10:33:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 64532 00:09:44.505 10:33:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:09:44.505 10:33:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:44.505 10:33:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64532 00:09:44.505 10:33:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:44.505 killing process with pid 64532 00:09:44.505 10:33:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:44.505 10:33:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64532' 00:09:44.505 Received shutdown signal, test time was about 10.000000 seconds 00:09:44.505 00:09:44.505 Latency(us) 00:09:44.505 [2024-11-15T10:33:10.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.505 [2024-11-15T10:33:10.003Z] =================================================================================================================== 00:09:44.505 [2024-11-15T10:33:10.003Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:44.505 10:33:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 64532 00:09:44.505 10:33:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 64532 00:09:44.505 10:33:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:44.505 10:33:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:44.505 10:33:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:44.505 10:33:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:44.505 10:33:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:44.505 10:33:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:44.505 10:33:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:44.505 10:33:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:44.764 rmmod nvme_tcp 00:09:44.764 rmmod nvme_fabrics 00:09:44.764 rmmod nvme_keyring 00:09:44.764 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:44.764 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:44.764 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:44.764 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64500 ']' 00:09:44.764 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64500 00:09:44.764 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 64500 ']' 
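[Note] The queue-depth run that just completed follows a three-step RPC-driven pattern: bdevperf is started idle, the remote namespace is attached over bdevperf's private RPC socket, and the timed run is triggered from outside. A condensed sketch using the paths from this VM, with rpc.py standing in for the harness's rpc_cmd wrapper:

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bdevperf.sock

# 1) start bdevperf idle (-z) with the deep queue under test
"$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
# (the harness polls with waitforlisten until $SOCK accepts RPCs)

# 2) attach the namespace the target exports inside the network namespace
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# 3) trigger the 10 s verify run and collect the JSON summary
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

kill "$bdevperf_pid" && wait "$bdevperf_pid"

The point of -q 1024 is to keep roughly a thousand commands outstanding against a single subsystem, far deeper than a typical fabric queue; the results block above reports io_failed and io_timeout of 0 at 8037.93 IOPS, which is what the test is checking for.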
00:09:44.764 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 64500 00:09:44.764 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:09:44.764 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:44.764 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64500 00:09:44.764 killing process with pid 64500 00:09:44.764 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:44.764 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:44.764 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64500' 00:09:44.764 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 64500 00:09:44.764 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 64500 00:09:45.023 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:45.023 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:45.023 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:45.023 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:45.023 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:45.023 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:45.023 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:45.023 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:45.023 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:45.023 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:45.023 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:45.023 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:45.023 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:45.023 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:45.023 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:45.023 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:45.023 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:45.023 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:45.023 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:45.023 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:45.023 10:33:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:45.023 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:45.282 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:45.282 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.282 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.282 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.282 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:45.282 00:09:45.282 real 0m13.401s 00:09:45.282 user 0m22.420s 00:09:45.282 sys 0m2.278s 00:09:45.282 ************************************ 00:09:45.282 END TEST nvmf_queue_depth 00:09:45.282 ************************************ 00:09:45.282 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:45.282 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.282 10:33:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:45.282 10:33:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:45.282 10:33:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:45.282 10:33:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:45.282 ************************************ 00:09:45.282 START TEST nvmf_target_multipath 00:09:45.282 ************************************ 00:09:45.282 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:45.282 * Looking for test storage... 
00:09:45.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:45.282 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:45.282 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:09:45.282 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:45.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.542 --rc genhtml_branch_coverage=1 00:09:45.542 --rc genhtml_function_coverage=1 00:09:45.542 --rc genhtml_legend=1 00:09:45.542 --rc geninfo_all_blocks=1 00:09:45.542 --rc geninfo_unexecuted_blocks=1 00:09:45.542 00:09:45.542 ' 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:45.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.542 --rc genhtml_branch_coverage=1 00:09:45.542 --rc genhtml_function_coverage=1 00:09:45.542 --rc genhtml_legend=1 00:09:45.542 --rc geninfo_all_blocks=1 00:09:45.542 --rc geninfo_unexecuted_blocks=1 00:09:45.542 00:09:45.542 ' 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:45.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.542 --rc genhtml_branch_coverage=1 00:09:45.542 --rc genhtml_function_coverage=1 00:09:45.542 --rc genhtml_legend=1 00:09:45.542 --rc geninfo_all_blocks=1 00:09:45.542 --rc geninfo_unexecuted_blocks=1 00:09:45.542 00:09:45.542 ' 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:45.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.542 --rc genhtml_branch_coverage=1 00:09:45.542 --rc genhtml_function_coverage=1 00:09:45.542 --rc genhtml_legend=1 00:09:45.542 --rc geninfo_all_blocks=1 00:09:45.542 --rc geninfo_unexecuted_blocks=1 00:09:45.542 00:09:45.542 ' 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.542 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.543 
10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.543 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:45.543 10:33:10 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:45.543 Cannot find device "nvmf_init_br" 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:45.543 Cannot find device "nvmf_init_br2" 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:45.543 Cannot find device "nvmf_tgt_br" 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:45.543 Cannot find device "nvmf_tgt_br2" 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:45.543 Cannot find device "nvmf_init_br" 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:45.543 Cannot find device "nvmf_init_br2" 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:45.543 Cannot find device "nvmf_tgt_br" 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:45.543 Cannot find device "nvmf_tgt_br2" 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:45.543 Cannot find device "nvmf_br" 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:45.543 Cannot find device "nvmf_init_if" 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:45.543 Cannot find device "nvmf_init_if2" 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:45.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:45.543 10:33:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:45.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.543 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:45.543 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:45.543 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:45.543 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:45.543 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:45.543 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:45.543 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:09:45.802 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:09:45.802 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms
00:09:45.802
00:09:45.802 --- 10.0.0.3 ping statistics ---
00:09:45.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:45.802 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:09:45.802 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:09:45.802 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms
00:09:45.802
00:09:45.802 --- 10.0.0.4 ping statistics ---
00:09:45.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:45.802 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:09:45.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:45.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms
00:09:45.802
00:09:45.802 --- 10.0.0.1 ping statistics ---
00:09:45.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:45.802 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:09:45.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:45.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms
00:09:45.802
00:09:45.802 --- 10.0.0.2 ping statistics ---
00:09:45.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:45.802 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']'
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']'
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64902
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64902
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@833 -- # '[' -z 64902 ']'
00:09:45.802 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:46.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:46.060 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:46.060 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.060 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:46.060 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:46.060 [2024-11-15 10:33:11.357855] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:09:46.060 [2024-11-15 10:33:11.358557] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.060 [2024-11-15 10:33:11.510782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:46.317 [2024-11-15 10:33:11.588951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.318 [2024-11-15 10:33:11.589031] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:46.318 [2024-11-15 10:33:11.589055] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.318 [2024-11-15 10:33:11.589067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.318 [2024-11-15 10:33:11.589076] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:46.318 [2024-11-15 10:33:11.590390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.318 [2024-11-15 10:33:11.590537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.318 [2024-11-15 10:33:11.590614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:46.318 [2024-11-15 10:33:11.590623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.318 [2024-11-15 10:33:11.652107] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:46.318 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:46.318 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@866 -- # return 0 00:09:46.318 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:46.318 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:46.318 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:46.318 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.318 10:33:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:46.884 [2024-11-15 10:33:12.122250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.884 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b 
Malloc0 00:09:47.142 Malloc0 00:09:47.142 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:47.400 10:33:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:47.966 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:48.225 [2024-11-15 10:33:13.468423] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:48.225 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:48.483 [2024-11-15 10:33:13.788864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:48.483 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid=50e4d619-cecf-4dd2-989d-1336dee31d8f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:48.483 10:33:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid=50e4d619-cecf-4dd2-989d-1336dee31d8f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:09:48.742 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:48.742 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # local i=0 00:09:48.742 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:48.742 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:48.742 10:33:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # sleep 2 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # return 0 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 
00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]]
00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa
00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64995
00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v
00:09:50.782 10:33:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1
00:09:50.782 [global]
00:09:50.782 thread=1
00:09:50.782 invalidate=1
00:09:50.782 rw=randrw
00:09:50.782 time_based=1
00:09:50.782 runtime=6
00:09:50.782 ioengine=libaio
00:09:50.782 direct=1
00:09:50.782 bs=4096
00:09:50.782 iodepth=128
00:09:50.782 norandommap=0
00:09:50.782 numjobs=1
00:09:50.782
00:09:50.782 verify_dump=1
00:09:50.782 verify_backlog=512
00:09:50.782 verify_state_save=0
00:09:50.782 do_verify=1
00:09:50.782 verify=crc32c-intel
00:09:50.782 [job0]
00:09:50.782 filename=/dev/nvme0n1
00:09:50.782 Could not set queue depth (nvme0n1)
00:09:51.040 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:51.040 fio-3.35
00:09:51.040 Starting 1 thread
00:09:51.976 10:33:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
00:09:52.235 10:33:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
00:09:52.235 10:33:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible
00:09:52.235 10:33:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible
00:09:52.235 10:33:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:09:52.235 10:33:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:09:52.235 10:33:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:09:52.235 10:33:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:09:52.235 10:33:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized
00:09:52.235 10:33:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized
00:09:52.235 10:33:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:09:52.235 10:33:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:09:52.235 10:33:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:09:52.235 10:33:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:09:52.235 10:33:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:09:52.803 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible
00:09:53.074 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized
00:09:53.074 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized
00:09:53.074 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:09:53.074 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:09:53.074 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:09:53.074 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:09:53.074 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible
00:09:53.074 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible
00:09:53.074 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:09:53.074 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:09:53.074 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:09:53.074 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:09:53.074 10:33:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64995
00:09:57.277
00:09:57.277 job0: (groupid=0, jobs=1): err= 0: pid=65016: Fri Nov 15 10:33:22 2024
00:09:57.277 read: IOPS=10.3k, BW=40.3MiB/s (42.3MB/s)(242MiB/6005msec)
00:09:57.277 slat (usec): min=4, max=6360, avg=56.74, stdev=229.26
00:09:57.277 clat (usec): min=1624, max=17324, avg=8442.89, stdev=1523.99
00:09:57.277 lat (usec): min=1631, max=17370, avg=8499.63, stdev=1528.38
00:09:57.277 clat percentiles (usec):
00:09:57.277 | 1.00th=[ 4293], 5.00th=[ 6259], 10.00th=[ 7177], 20.00th=[ 7701],
00:09:57.277 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8455],
00:09:57.277 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9634], 95.00th=[12125],
00:09:57.277 | 99.00th=[13304], 99.50th=[13698], 99.90th=[14484], 99.95th=[16319],
00:09:57.277 | 99.99th=[16909]
00:09:57.277 bw ( KiB/s): min=14888, max=24840, per=51.62%, avg=21322.18, stdev=3265.92, samples=11
00:09:57.277 iops : min= 3722, max= 6210, avg=5330.55, stdev=816.48, samples=11
00:09:57.277 write: IOPS=6015, BW=23.5MiB/s (24.6MB/s)(127MiB/5419msec); 0 zone resets
00:09:57.277 slat (usec): min=14, max=8724, avg=65.91, stdev=167.90
00:09:57.277 clat (usec): min=1394, max=16686, avg=7315.13, stdev=1370.02
00:09:57.277 lat (usec): min=1442, max=16726, avg=7381.04, stdev=1374.99
00:09:57.277 clat percentiles (usec):
00:09:57.277 | 1.00th=[ 3326], 5.00th=[ 4228], 10.00th=[ 5407], 20.00th=[ 6849],
00:09:57.277 | 30.00th=[ 7111], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7701],
00:09:57.277 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8717],
00:09:57.277 | 99.00th=[11469], 99.50th=[12256], 99.90th=[14484], 99.95th=[15664],
00:09:57.277 | 99.99th=[16581]
00:09:57.277 bw ( KiB/s): min=15600, max=24576, per=88.81%, avg=21370.64, stdev=3013.43, samples=11
00:09:57.277 iops : min= 3900, max= 6144, avg=5342.64, stdev=753.34, samples=11
00:09:57.277 lat (msec) : 2=0.02%, 4=1.72%, 10=92.28%, 20=5.98%
00:09:57.277 cpu : usr=5.73%, sys=21.40%, ctx=5536, majf=0, minf=90
00:09:57.277 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:09:57.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:57.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:57.277 issued rwts: total=62007,32599,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:57.277 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:57.277
00:09:57.277 Run status group 0 (all jobs):
00:09:57.277 READ: bw=40.3MiB/s (42.3MB/s), 40.3MiB/s-40.3MiB/s (42.3MB/s-42.3MB/s), io=242MiB (254MB), run=6005-6005msec
00:09:57.277 WRITE: bw=23.5MiB/s (24.6MB/s), 23.5MiB/s-23.5MiB/s (24.6MB/s-24.6MB/s), io=127MiB (134MB), run=5419-5419msec
00:09:57.277
00:09:57.277 Disk stats (read/write):
00:09:57.277 nvme0n1: ios=61158/32013, merge=0/0, ticks=494085/219520, in_queue=713605, util=98.53%
00:09:57.277 10:33:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized
00:09:57.277 10:33:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized
00:09:57.844 10:33:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized
00:09:57.844 10:33:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized
00:09:57.844 10:33:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:09:57.844 10:33:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:09:57.844 10:33:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:09:57.844 10:33:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:09:57.844 10:33:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized
00:09:57.844 10:33:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized
00:09:57.844 10:33:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:09:57.844 10:33:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:09:57.844 10:33:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:09:57.844 10:33:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:09:57.844 10:33:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin
00:09:57.844 10:33:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65099
00:09:57.844 10:33:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v
00:09:57.844 10:33:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1
00:09:57.844 [global]
00:09:57.844 thread=1
00:09:57.844 invalidate=1
00:09:57.844 rw=randrw
00:09:57.844 time_based=1
00:09:57.844 runtime=6
00:09:57.844 ioengine=libaio
00:09:57.844 direct=1
00:09:57.844 bs=4096
00:09:57.844 iodepth=128
00:09:57.844 norandommap=0
00:09:57.844 numjobs=1
00:09:57.844
00:09:57.844 verify_dump=1
00:09:57.844 verify_backlog=512
00:09:57.844 verify_state_save=0
00:09:57.844 do_verify=1
00:09:57.844 verify=crc32c-intel
00:09:57.844 [job0]
00:09:57.844 filename=/dev/nvme0n1
00:09:57.844 Could not set queue depth (nvme0n1)
00:09:57.844 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:57.844 fio-3.35
00:09:57.844 Starting 1 thread
00:09:58.781 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
00:09:59.039 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
00:09:59.299 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible
00:09:59.299 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible
00:09:59.299 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:09:59.299 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:09:59.299 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:09:59.299 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:09:59.299 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized
00:09:59.299 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized
00:09:59.299 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:09:59.299 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:09:59.299 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:09:59.299 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:09:59.299 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:09:59.558 10:33:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible
00:09:59.817 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized
00:09:59.817 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized
00:09:59.817 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:09:59.817 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:09:59.817 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:09:59.817 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:09:59.817 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible
00:09:59.817 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible
00:09:59.817 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:09:59.817 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:09:59.817 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:09:59.817 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:09:59.817 10:33:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65099
00:10:04.003
00:10:04.003 job0: (groupid=0, jobs=1): err= 0: pid=65120: Fri Nov 15 10:33:29 2024
00:10:04.003 read: IOPS=11.4k, BW=44.4MiB/s (46.5MB/s)(267MiB/6004msec)
00:10:04.003 slat (usec): min=3, max=8028, avg=42.69, stdev=188.05
00:10:04.003 clat (usec): min=328, max=15474, avg=7625.38, stdev=1948.58
00:10:04.003 lat (usec): min=341, max=15504, avg=7668.07, stdev=1963.73
00:10:04.003 clat percentiles (usec):
00:10:04.003 | 1.00th=[ 3064], 5.00th=[ 4080], 10.00th=[ 4752], 20.00th=[ 5997],
00:10:04.003 | 30.00th=[ 7111], 40.00th=[ 7701], 50.00th=[ 8029], 60.00th=[ 8291],
00:10:04.003 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9241], 95.00th=[10552],
00:10:04.003 | 99.00th=[13042], 99.50th=[13304], 99.90th=[13698], 99.95th=[13829],
00:10:04.003 | 99.99th=[14746]
00:10:04.003 bw ( KiB/s): min=14048, max=35992, per=54.90%, avg=24955.64, stdev=7011.96, samples=11
00:10:04.003 iops : min= 3512, max= 8998, avg=6238.91, stdev=1752.99, samples=11
00:10:04.003 write: IOPS=6800, BW=26.6MiB/s (27.9MB/s)(144MiB/5436msec); 0 zone resets
00:10:04.003 slat (usec): min=5, max=1824, avg=54.55, stdev=134.00
00:10:04.003 clat (usec): min=1117, max=14228, avg=6546.36, stdev=1798.44
00:10:04.003 lat (usec): min=1152, max=14777, avg=6600.91, stdev=1813.57
00:10:04.003 clat percentiles (usec):
00:10:04.003 | 1.00th=[ 2671], 5.00th=[ 3359], 10.00th=[ 3818], 20.00th=[ 4555],
00:10:04.003 | 30.00th=[ 5473], 40.00th=[ 6783], 50.00th=[ 7177], 60.00th=[ 7504],
00:10:04.003 | 70.00th=[ 7701], 80.00th=[ 7963], 90.00th=[ 8291], 95.00th=[ 8586],
00:10:04.003 | 99.00th=[10552], 99.50th=[11600], 99.90th=[12780], 99.95th=[13304],
00:10:04.003 | 99.99th=[14091]
00:10:04.003 bw ( KiB/s): min=14264, max=36790, per=91.64%, avg=24928.55, stdev=6839.10, samples=11
00:10:04.003 iops : min= 3566, max= 9197, avg=6232.09, stdev=1709.69, samples=11
00:10:04.003 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02%
00:10:04.003 lat (msec) : 2=0.20%, 4=7.08%, 10=88.62%, 20=4.07%
00:10:04.003 cpu : usr=6.15%, sys=23.67%, ctx=6072, majf=0, minf=102
00:10:04.003 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:10:04.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:04.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:10:04.003 issued rwts: total=68232,36969,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:04.003 latency : target=0,
window=0, percentile=100.00%, depth=128 00:10:04.003 00:10:04.003 Run status group 0 (all jobs): 00:10:04.003 READ: bw=44.4MiB/s (46.5MB/s), 44.4MiB/s-44.4MiB/s (46.5MB/s-46.5MB/s), io=267MiB (279MB), run=6004-6004msec 00:10:04.003 WRITE: bw=26.6MiB/s (27.9MB/s), 26.6MiB/s-26.6MiB/s (27.9MB/s-27.9MB/s), io=144MiB (151MB), run=5436-5436msec 00:10:04.003 00:10:04.003 Disk stats (read/write): 00:10:04.003 nvme0n1: ios=67411/36368, merge=0/0, ticks=490770/221727, in_queue=712497, util=98.61% 00:10:04.003 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:04.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:04.003 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:04.003 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1221 -- # local i=0 00:10:04.003 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.003 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:04.003 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.003 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:04.003 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1233 -- # return 0 00:10:04.003 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:04.569 rmmod nvme_tcp 00:10:04.569 rmmod nvme_fabrics 00:10:04.569 rmmod nvme_keyring 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
64902 ']' 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64902 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@952 -- # '[' -z 64902 ']' 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # kill -0 64902 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # uname 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64902 00:10:04.569 killing process with pid 64902 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64902' 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@971 -- # kill 64902 00:10:04.569 10:33:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@976 -- # wait 64902 00:10:04.828 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:04.828 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:04.828 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:04.828 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:04.828 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:04.828 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:04.828 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:04.828 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:04.828 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:04.828 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:04.828 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:04.828 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:04.828 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:04.828 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:04.828 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:04.828 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:04.828 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:04.828 10:33:30 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:04.828 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:04.828 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:04.828 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:05.087 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:05.087 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:05.087 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.087 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.087 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.087 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:10:05.087 00:10:05.087 real 0m19.768s 00:10:05.087 user 1m13.513s 00:10:05.087 sys 0m9.818s 00:10:05.087 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:05.087 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:05.087 ************************************ 00:10:05.087 END TEST nvmf_target_multipath 00:10:05.087 ************************************ 00:10:05.087 10:33:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:05.087 10:33:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:05.087 10:33:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:05.087 10:33:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:05.087 ************************************ 00:10:05.087 START TEST nvmf_zcopy 00:10:05.087 ************************************ 00:10:05.087 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:05.087 * Looking for test storage... 
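[Annotation] The repeated multipath.sh@18-@25 fragments throughout the multipath trace above are all invocations of check_ana_state, which polls a path's sysfs ana_state file until the kernel reports the expected ANA value or a timeout expires. A minimal reconstruction assembled from the traced lines (a sketch, not the verbatim test/nvmf/target/multipath.sh source):

    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state

        # Poll until the file exists and reports the expected state
        # (inaccessible, optimized, non-optimized), giving up after ~20s.
        while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
            sleep 1
            (( timeout-- == 0 )) && return 1
        done
    }

This is why the trace alternates nvmf_subsystem_listener_set_ana_state RPCs against 10.0.0.3 and 10.0.0.4 with pairs of check_ana_state calls: each listener-side ANA change must be observed by the initiator on both controller paths (nvme0c0n1, nvme0c1n1) before fio traffic continues.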
00:10:05.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:05.087 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:05.087 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:10:05.087 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:05.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.347 --rc genhtml_branch_coverage=1 00:10:05.347 --rc genhtml_function_coverage=1 00:10:05.347 --rc genhtml_legend=1 00:10:05.347 --rc geninfo_all_blocks=1 00:10:05.347 --rc geninfo_unexecuted_blocks=1 00:10:05.347 00:10:05.347 ' 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:05.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.347 --rc genhtml_branch_coverage=1 00:10:05.347 --rc genhtml_function_coverage=1 00:10:05.347 --rc genhtml_legend=1 00:10:05.347 --rc geninfo_all_blocks=1 00:10:05.347 --rc geninfo_unexecuted_blocks=1 00:10:05.347 00:10:05.347 ' 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:05.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.347 --rc genhtml_branch_coverage=1 00:10:05.347 --rc genhtml_function_coverage=1 00:10:05.347 --rc genhtml_legend=1 00:10:05.347 --rc geninfo_all_blocks=1 00:10:05.347 --rc geninfo_unexecuted_blocks=1 00:10:05.347 00:10:05.347 ' 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:05.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.347 --rc genhtml_branch_coverage=1 00:10:05.347 --rc genhtml_function_coverage=1 00:10:05.347 --rc genhtml_legend=1 00:10:05.347 --rc geninfo_all_blocks=1 00:10:05.347 --rc geninfo_unexecuted_blocks=1 00:10:05.347 00:10:05.347 ' 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
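[Annotation] The scripts/common.sh@333-@368 block above is the coverage-tooling version gate: lt 1.15 2 expands to cmp_versions 1.15 '<' 2, which splits both version strings on '.', '-' and ':' and compares them field by field. A condensed sketch of that comparison (simplified from the trace; the traced implementation additionally validates each field through decimal() and fills short versions the same way):

    cmp_versions() { # e.g. cmp_versions 1.15 '<' 2
        local op=$2 ver1 ver2 v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"

        # Compare field by field; missing fields default to 0.
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '=' ]]  # all fields equal
    }

Here the check passes (lcov 1.15 < 2), so the branch/function-coverage LCOV_OPTS flags traced above are exported for the rest of the zcopy run.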
00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.347 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:05.348 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
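[Annotation] Note the harmless wart above: test/nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' because an optional flag variable is unset, and test(1) rejects the empty string as an integer, printing "integer expression expected". The run is unaffected (a failing if condition does not trip set -e), but the noise is avoidable by defaulting such flags before the numeric test. A sketch of the defensive pattern, using a made-up SPDK_TEST_EXAMPLE flag since the trace does not show which variable line 33 actually reads:

    # Default the (hypothetical) flag to 0 so test(1) always sees an integer.
    if [ "${SPDK_TEST_EXAMPLE:-0}" -eq 1 ]; then
        NVMF_APP+=(-e 0xFFFF)  # hypothetical extra argument, for illustration only
    fi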
00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:05.348 Cannot find device "nvmf_init_br" 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:05.348 10:33:30 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:05.348 Cannot find device "nvmf_init_br2" 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:05.348 Cannot find device "nvmf_tgt_br" 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:05.348 Cannot find device "nvmf_tgt_br2" 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:05.348 Cannot find device "nvmf_init_br" 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:05.348 Cannot find device "nvmf_init_br2" 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:05.348 Cannot find device "nvmf_tgt_br" 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:05.348 Cannot find device "nvmf_tgt_br2" 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:05.348 Cannot find device "nvmf_br" 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:05.348 Cannot find device "nvmf_init_if" 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:05.348 Cannot find device "nvmf_init_if2" 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:05.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:05.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:05.348 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:05.671 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:10:05.671 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:05.671 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:05.671 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:05.671 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:05.671 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:05.671 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:05.671 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:05.671 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:05.671 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:05.671 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:05.671 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:05.671 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:05.671 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:05.671 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:05.671 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:05.671 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:05.671 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:05.671 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:05.671 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:05.671 10:33:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:05.671 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:05.671 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:05.671 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:05.671 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:05.671 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:05.671 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:05.671 10:33:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:05.671 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:05.671 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:05.671 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:05.671 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:05.671 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:10:05.671 00:10:05.671 --- 10.0.0.3 ping statistics --- 00:10:05.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.671 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:05.671 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:05.671 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:05.671 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:10:05.671 00:10:05.671 --- 10.0.0.4 ping statistics --- 00:10:05.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.671 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:10:05.671 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:05.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:05.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:10:05.671 00:10:05.671 --- 10.0.0.1 ping statistics --- 00:10:05.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.671 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:10:05.671 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:05.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:05.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:10:05.671 00:10:05.671 --- 10.0.0.2 ping statistics --- 00:10:05.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.671 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:10:05.671 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:05.671 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:10:05.671 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:05.672 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:05.672 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:05.672 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:05.672 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:05.672 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:05.672 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:05.672 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:05.672 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:05.672 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:05.672 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:05.672 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65425 00:10:05.672 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:05.672 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65425 00:10:05.672 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 65425 ']' 00:10:05.672 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.672 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:05.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.672 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.672 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:05.672 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:05.956 [2024-11-15 10:33:31.159240] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:10:05.956 [2024-11-15 10:33:31.159356] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.956 [2024-11-15 10:33:31.298575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.956 [2024-11-15 10:33:31.360183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:05.956 [2024-11-15 10:33:31.360259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:05.956 [2024-11-15 10:33:31.360287] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:05.956 [2024-11-15 10:33:31.360295] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:05.956 [2024-11-15 10:33:31.360301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:05.956 [2024-11-15 10:33:31.360767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.956 [2024-11-15 10:33:31.418361] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.216 [2024-11-15 10:33:31.543861] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:10:06.216 [2024-11-15 10:33:31.560015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.216 malloc0 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:06.216 { 00:10:06.216 "params": { 00:10:06.216 "name": "Nvme$subsystem", 00:10:06.216 "trtype": "$TEST_TRANSPORT", 00:10:06.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:06.216 "adrfam": "ipv4", 00:10:06.216 "trsvcid": "$NVMF_PORT", 00:10:06.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:06.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:06.216 "hdgst": ${hdgst:-false}, 00:10:06.216 "ddgst": ${ddgst:-false} 00:10:06.216 }, 00:10:06.216 "method": "bdev_nvme_attach_controller" 00:10:06.216 } 00:10:06.216 EOF 00:10:06.216 )") 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
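[Annotation] The nvmf/common.sh@560-@584 lines above are gen_nvmf_target_json assembling the bdevperf configuration on the fly: one bdev_nvme_attach_controller stanza per requested subsystem is accumulated into the config array, then comma-joined and normalized by jq (the resulting JSON is printed just below). A condensed sketch of the same technique; the outer bdevperf envelope here is an assumption, as this trace only shows the per-controller stanza and the final jq pass:

    gen_nvmf_target_json() {
        local subsystem config=()

        for subsystem in "${@:-1}"; do
            # One attach-controller stanza per subsystem, mirroring the traced heredoc.
            config+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "%s", "traddr": "%s", "adrfam": "ipv4", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": %s, "ddgst": %s}, "method": "bdev_nvme_attach_controller"}' \
                "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" \
                "$subsystem" "$subsystem" "${hdgst:-false}" "${ddgst:-false}")")
        done

        # Comma-join the stanzas; jq validates and pretty-prints the final document.
        local IFS=,
        jq . <<< "{\"subsystems\": [{\"subsystem\": \"bdev\", \"config\": [${config[*]}]}]}"
    }

As seen at zcopy.sh@33, bdevperf consumes this via --json /dev/fd/62, i.e. through process substitution, so the generated configuration never touches disk.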
00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:06.216 10:33:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:06.216 "params": { 00:10:06.216 "name": "Nvme1", 00:10:06.216 "trtype": "tcp", 00:10:06.216 "traddr": "10.0.0.3", 00:10:06.216 "adrfam": "ipv4", 00:10:06.216 "trsvcid": "4420", 00:10:06.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:06.216 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:06.216 "hdgst": false, 00:10:06.216 "ddgst": false 00:10:06.216 }, 00:10:06.216 "method": "bdev_nvme_attach_controller" 00:10:06.216 }' 00:10:06.216 [2024-11-15 10:33:31.662676] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:10:06.216 [2024-11-15 10:33:31.662785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65445 ] 00:10:06.475 [2024-11-15 10:33:31.817740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.475 [2024-11-15 10:33:31.889939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.475 [2024-11-15 10:33:31.960072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:06.733 Running I/O for 10 seconds... 00:10:08.605 5580.00 IOPS, 43.59 MiB/s [2024-11-15T10:33:35.099Z] 5710.50 IOPS, 44.61 MiB/s [2024-11-15T10:33:36.475Z] 5698.00 IOPS, 44.52 MiB/s [2024-11-15T10:33:37.411Z] 5665.50 IOPS, 44.26 MiB/s [2024-11-15T10:33:38.404Z] 5713.80 IOPS, 44.64 MiB/s [2024-11-15T10:33:39.340Z] 5725.17 IOPS, 44.73 MiB/s [2024-11-15T10:33:40.276Z] 5723.43 IOPS, 44.71 MiB/s [2024-11-15T10:33:41.210Z] 5746.00 IOPS, 44.89 MiB/s [2024-11-15T10:33:42.152Z] 5762.78 IOPS, 45.02 MiB/s [2024-11-15T10:33:42.152Z] 5766.90 IOPS, 45.05 MiB/s 00:10:16.654 Latency(us) 00:10:16.654 [2024-11-15T10:33:42.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:16.654 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:16.654 Verification LBA range: start 0x0 length 0x1000 00:10:16.654 Nvme1n1 : 10.02 5769.88 45.08 0.00 0.00 22114.45 2770.39 33840.41 00:10:16.654 [2024-11-15T10:33:42.152Z] =================================================================================================================== 00:10:16.654 [2024-11-15T10:33:42.152Z] Total : 5769.88 45.08 0.00 0.00 22114.45 2770.39 33840.41 00:10:16.914 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65568 00:10:16.914 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:16.914 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:16.914 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:16.914 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.914 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:16.914 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:16.914 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:16.914 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:16.914 { 00:10:16.914 "params": { 00:10:16.914 "name": "Nvme$subsystem", 00:10:16.914 "trtype": "$TEST_TRANSPORT", 00:10:16.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:16.914 "adrfam": "ipv4", 00:10:16.914 "trsvcid": "$NVMF_PORT", 00:10:16.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:16.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:16.914 "hdgst": ${hdgst:-false}, 00:10:16.914 "ddgst": ${ddgst:-false} 00:10:16.914 }, 00:10:16.914 "method": "bdev_nvme_attach_controller" 00:10:16.914 } 00:10:16.914 EOF 00:10:16.914 )") 00:10:16.914 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:16.914 [2024-11-15 10:33:42.318291] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.914 [2024-11-15 10:33:42.318362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.914 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:16.914 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:16.914 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:16.914 "params": { 00:10:16.914 "name": "Nvme1", 00:10:16.914 "trtype": "tcp", 00:10:16.914 "traddr": "10.0.0.3", 00:10:16.914 "adrfam": "ipv4", 00:10:16.914 "trsvcid": "4420", 00:10:16.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:16.914 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:16.914 "hdgst": false, 00:10:16.914 "ddgst": false 00:10:16.914 }, 00:10:16.914 "method": "bdev_nvme_attach_controller" 00:10:16.914 }' 00:10:16.914 [2024-11-15 10:33:42.330271] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.914 [2024-11-15 10:33:42.330331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.914 [2024-11-15 10:33:42.342253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.914 [2024-11-15 10:33:42.342300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.914 [2024-11-15 10:33:42.354255] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.914 [2024-11-15 10:33:42.354301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.914 [2024-11-15 10:33:42.363687] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:10:16.914 [2024-11-15 10:33:42.363790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65568 ] 00:10:16.914 [2024-11-15 10:33:42.366272] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.914 [2024-11-15 10:33:42.366307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.914 [2024-11-15 10:33:42.378267] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.914 [2024-11-15 10:33:42.378297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.914 [2024-11-15 10:33:42.390269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.914 [2024-11-15 10:33:42.390300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.914 [2024-11-15 10:33:42.402272] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.914 [2024-11-15 10:33:42.402302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.173 [2024-11-15 10:33:42.414273] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.173 [2024-11-15 10:33:42.414304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.173 [2024-11-15 10:33:42.426271] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.173 [2024-11-15 10:33:42.426315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.173 [2024-11-15 10:33:42.438286] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.173 [2024-11-15 10:33:42.438332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.173 [2024-11-15 10:33:42.450273] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.173 [2024-11-15 10:33:42.450316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.173 [2024-11-15 10:33:42.462276] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.173 [2024-11-15 10:33:42.462318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.173 [2024-11-15 10:33:42.474287] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.173 [2024-11-15 10:33:42.474333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.173 [2024-11-15 10:33:42.486294] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.173 [2024-11-15 10:33:42.486322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.173 [2024-11-15 10:33:42.498291] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.173 [2024-11-15 10:33:42.498320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.173 [2024-11-15 10:33:42.508558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.173 [2024-11-15 10:33:42.510295] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.173 [2024-11-15 10:33:42.510322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:10:17.173 [2024-11-15 10:33:42.522315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.173 [2024-11-15 10:33:42.522350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.173 [2024-11-15 10:33:42.534308] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.173 [2024-11-15 10:33:42.534342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.173 [2024-11-15 10:33:42.546309] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.173 [2024-11-15 10:33:42.546341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.173 [2024-11-15 10:33:42.558305] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.173 [2024-11-15 10:33:42.558335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.173 [2024-11-15 10:33:42.568711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.173 [2024-11-15 10:33:42.570318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.173 [2024-11-15 10:33:42.570528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.173 [2024-11-15 10:33:42.582344] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.173 [2024-11-15 10:33:42.582535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.173 [2024-11-15 10:33:42.594352] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.173 [2024-11-15 10:33:42.594562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.173 [2024-11-15 10:33:42.606349] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.173 [2024-11-15 10:33:42.606581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.173 [2024-11-15 10:33:42.618355] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.173 [2024-11-15 10:33:42.618615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.173 [2024-11-15 10:33:42.630355] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.173 [2024-11-15 10:33:42.630584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.173 [2024-11-15 10:33:42.632714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:17.173 [2024-11-15 10:33:42.642358] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.173 [2024-11-15 10:33:42.642560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.173 [2024-11-15 10:33:42.654371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.173 [2024-11-15 10:33:42.654604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.173 [2024-11-15 10:33:42.666361] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.173 [2024-11-15 10:33:42.666538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.431 [2024-11-15 10:33:42.678353] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:17.431 [2024-11-15 10:33:42.678534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.431 [2024-11-15 10:33:42.690379] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.431 [2024-11-15 10:33:42.690601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.431 [2024-11-15 10:33:42.702376] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.431 [2024-11-15 10:33:42.702600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.431 [2024-11-15 10:33:42.714383] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.431 [2024-11-15 10:33:42.714574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.431 [2024-11-15 10:33:42.726390] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.431 [2024-11-15 10:33:42.726550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.431 [2024-11-15 10:33:42.738401] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.431 [2024-11-15 10:33:42.738558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.431 [2024-11-15 10:33:42.750413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.431 [2024-11-15 10:33:42.750581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.431 Running I/O for 5 seconds... 00:10:17.432 [2024-11-15 10:33:42.771161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.432 [2024-11-15 10:33:42.771217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.432 [2024-11-15 10:33:42.780758] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.432 [2024-11-15 10:33:42.780807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.432 [2024-11-15 10:33:42.801730] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.432 [2024-11-15 10:33:42.801768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.432 [2024-11-15 10:33:42.817713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.432 [2024-11-15 10:33:42.817746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.432 [2024-11-15 10:33:42.836724] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.432 [2024-11-15 10:33:42.836762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.432 [2024-11-15 10:33:42.852247] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.432 [2024-11-15 10:33:42.852431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.432 [2024-11-15 10:33:42.862380] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.432 [2024-11-15 10:33:42.862417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.432 [2024-11-15 10:33:42.877742] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.432 [2024-11-15 10:33:42.877779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
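The ERROR pairs interleaved with the run output are the other half of the test: while bdevperf drives I/O, namespace-add RPCs keep being issued against an NSID that is already attached, and the target rejects each attempt with the subsystem.c/nvmf_rpc.c pair seen above. A hypothetical loop of this shape would produce the same pattern; it is not the test's actual script, and the rpc.py path and Malloc0 bdev name are assumptions:

# Hypothetical reproduction: re-adding an NSID that is already in use fails
# on every call while the bdevperf process ($perfpid, traced above) runs.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path
while kill -0 "$perfpid" 2>/dev/null; do
    # NSID 1 is already attached, so the target logs the error pair each time.
    "$rpc_py" nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0 || true
    sleep 0.1
done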
00:10:17.432 [2024-11-15 10:33:42.894039] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.432 [2024-11-15 10:33:42.894091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.432 [2024-11-15 10:33:42.910750] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.432 [2024-11-15 10:33:42.910983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.689 [2024-11-15 10:33:42.927158] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.689 [2024-11-15 10:33:42.927197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.689 [2024-11-15 10:33:42.944818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.689 [2024-11-15 10:33:42.944859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.689 [2024-11-15 10:33:42.961279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.689 [2024-11-15 10:33:42.961318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.690 [2024-11-15 10:33:42.978731] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.690 [2024-11-15 10:33:42.978768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.690 [2024-11-15 10:33:42.994945] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.690 [2024-11-15 10:33:42.994985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.690 [2024-11-15 10:33:43.011632] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.690 [2024-11-15 10:33:43.011675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.690 [2024-11-15 10:33:43.029084] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.690 [2024-11-15 10:33:43.029149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.690 [2024-11-15 10:33:43.044087] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.690 [2024-11-15 10:33:43.044132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.690 [2024-11-15 10:33:43.060025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.690 [2024-11-15 10:33:43.060063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.690 [2024-11-15 10:33:43.069355] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.690 [2024-11-15 10:33:43.069395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.690 [2024-11-15 10:33:43.085902] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.690 [2024-11-15 10:33:43.086073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.690 [2024-11-15 10:33:43.096707] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.690 [2024-11-15 10:33:43.096860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.690 [2024-11-15 10:33:43.111277] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.690 
[2024-11-15 10:33:43.111336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.690 [2024-11-15 10:33:43.120965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.690 [2024-11-15 10:33:43.121009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.690 [2024-11-15 10:33:43.136664] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.690 [2024-11-15 10:33:43.136704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.690 [2024-11-15 10:33:43.153004] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.690 [2024-11-15 10:33:43.153043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.690 [2024-11-15 10:33:43.169844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.690 [2024-11-15 10:33:43.169886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.948 [2024-11-15 10:33:43.187160] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.948 [2024-11-15 10:33:43.187200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.948 [2024-11-15 10:33:43.203248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.948 [2024-11-15 10:33:43.203287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.948 [2024-11-15 10:33:43.219317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.948 [2024-11-15 10:33:43.219354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.948 [2024-11-15 10:33:43.236720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.948 [2024-11-15 10:33:43.236756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.948 [2024-11-15 10:33:43.253242] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.948 [2024-11-15 10:33:43.253289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.948 [2024-11-15 10:33:43.270506] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.948 [2024-11-15 10:33:43.270581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.948 [2024-11-15 10:33:43.286160] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.948 [2024-11-15 10:33:43.286198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.948 [2024-11-15 10:33:43.295512] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.948 [2024-11-15 10:33:43.295580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.948 [2024-11-15 10:33:43.310975] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.948 [2024-11-15 10:33:43.311213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.948 [2024-11-15 10:33:43.321838] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.948 [2024-11-15 10:33:43.322112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.948 [2024-11-15 10:33:43.336856] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.948 [2024-11-15 10:33:43.337065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.948 [2024-11-15 10:33:43.355205] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.948 [2024-11-15 10:33:43.355243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.948 [2024-11-15 10:33:43.369816] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.948 [2024-11-15 10:33:43.369851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.948 [2024-11-15 10:33:43.385727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.948 [2024-11-15 10:33:43.385765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.948 [2024-11-15 10:33:43.404822] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.948 [2024-11-15 10:33:43.404860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.948 [2024-11-15 10:33:43.419917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.948 [2024-11-15 10:33:43.420085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.948 [2024-11-15 10:33:43.437457] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.948 [2024-11-15 10:33:43.437545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.207 [2024-11-15 10:33:43.453481] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.207 [2024-11-15 10:33:43.453558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.207 [2024-11-15 10:33:43.462960] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.207 [2024-11-15 10:33:43.463120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.207 [2024-11-15 10:33:43.478056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.207 [2024-11-15 10:33:43.478227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.207 [2024-11-15 10:33:43.493856] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.207 [2024-11-15 10:33:43.493911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.207 [2024-11-15 10:33:43.511485] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.207 [2024-11-15 10:33:43.511539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.207 [2024-11-15 10:33:43.526263] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.207 [2024-11-15 10:33:43.526301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.207 [2024-11-15 10:33:43.541626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.207 [2024-11-15 10:33:43.541661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.207 [2024-11-15 10:33:43.551046] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.207 [2024-11-15 10:33:43.551083] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.207 [2024-11-15 10:33:43.565677] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.207 [2024-11-15 10:33:43.565713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.207 [2024-11-15 10:33:43.580269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.207 [2024-11-15 10:33:43.580307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.207 [2024-11-15 10:33:43.595461] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.207 [2024-11-15 10:33:43.595498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.207 [2024-11-15 10:33:43.611671] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.207 [2024-11-15 10:33:43.611709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.207 [2024-11-15 10:33:43.628712] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.207 [2024-11-15 10:33:43.628748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.207 [2024-11-15 10:33:43.644791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.207 [2024-11-15 10:33:43.644828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.207 [2024-11-15 10:33:43.654337] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.207 [2024-11-15 10:33:43.654373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.207 [2024-11-15 10:33:43.666213] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.207 [2024-11-15 10:33:43.666251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.207 [2024-11-15 10:33:43.683151] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.207 [2024-11-15 10:33:43.683241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.207 [2024-11-15 10:33:43.699080] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.207 [2024-11-15 10:33:43.699126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.466 [2024-11-15 10:33:43.709637] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.466 [2024-11-15 10:33:43.709672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.466 [2024-11-15 10:33:43.725473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.466 [2024-11-15 10:33:43.725525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.466 [2024-11-15 10:33:43.742928] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.466 [2024-11-15 10:33:43.742966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.466 [2024-11-15 10:33:43.758086] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.466 [2024-11-15 10:33:43.758157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.466 11309.00 IOPS, 88.35 MiB/s [2024-11-15T10:33:43.964Z] [2024-11-15 
10:33:43.767783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.466 [2024-11-15 10:33:43.767821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.466 [2024-11-15 10:33:43.783672] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.466 [2024-11-15 10:33:43.783708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.466 [2024-11-15 10:33:43.799279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.466 [2024-11-15 10:33:43.799345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.466 [2024-11-15 10:33:43.815024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.466 [2024-11-15 10:33:43.815076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.466 [2024-11-15 10:33:43.832966] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.466 [2024-11-15 10:33:43.833015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.466 [2024-11-15 10:33:43.848292] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.466 [2024-11-15 10:33:43.848327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.466 [2024-11-15 10:33:43.857171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.466 [2024-11-15 10:33:43.857205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.466 [2024-11-15 10:33:43.873473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.466 [2024-11-15 10:33:43.873507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.466 [2024-11-15 10:33:43.883979] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.466 [2024-11-15 10:33:43.884029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.466 [2024-11-15 10:33:43.898681] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.466 [2024-11-15 10:33:43.898718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.466 [2024-11-15 10:33:43.916428] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.467 [2024-11-15 10:33:43.916489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.467 [2024-11-15 10:33:43.930275] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.467 [2024-11-15 10:33:43.930327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.467 [2024-11-15 10:33:43.946196] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.467 [2024-11-15 10:33:43.946249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.725 [2024-11-15 10:33:43.963964] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.725 [2024-11-15 10:33:43.964069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.725 [2024-11-15 10:33:43.979377] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.726 [2024-11-15 10:33:43.979449] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.726 [2024-11-15 10:33:43.988869] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.726 [2024-11-15 10:33:43.988947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.726 [2024-11-15 10:33:44.005308] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.726 [2024-11-15 10:33:44.005383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.726 [2024-11-15 10:33:44.022000] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.726 [2024-11-15 10:33:44.022058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.726 [2024-11-15 10:33:44.039176] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.726 [2024-11-15 10:33:44.039252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.726 [2024-11-15 10:33:44.055493] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.726 [2024-11-15 10:33:44.055582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.726 [2024-11-15 10:33:44.072109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.726 [2024-11-15 10:33:44.072191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.726 [2024-11-15 10:33:44.089135] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.726 [2024-11-15 10:33:44.089192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.726 [2024-11-15 10:33:44.105172] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.726 [2024-11-15 10:33:44.105243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.726 [2024-11-15 10:33:44.122056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.726 [2024-11-15 10:33:44.122117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.726 [2024-11-15 10:33:44.138694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.726 [2024-11-15 10:33:44.138748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.726 [2024-11-15 10:33:44.154604] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.726 [2024-11-15 10:33:44.154671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.726 [2024-11-15 10:33:44.172203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.726 [2024-11-15 10:33:44.172261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.726 [2024-11-15 10:33:44.188758] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.726 [2024-11-15 10:33:44.188815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.726 [2024-11-15 10:33:44.206039] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.726 [2024-11-15 10:33:44.206117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.984 [2024-11-15 10:33:44.222222] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.984 [2024-11-15 10:33:44.222322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.984 [2024-11-15 10:33:44.241461] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.984 [2024-11-15 10:33:44.241546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.984 [2024-11-15 10:33:44.256053] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.984 [2024-11-15 10:33:44.256138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.984 [2024-11-15 10:33:44.271889] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.984 [2024-11-15 10:33:44.271953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.984 [2024-11-15 10:33:44.290644] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.984 [2024-11-15 10:33:44.290692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.984 [2024-11-15 10:33:44.304624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.984 [2024-11-15 10:33:44.304673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.984 [2024-11-15 10:33:44.319692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.984 [2024-11-15 10:33:44.319738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.984 [2024-11-15 10:33:44.335076] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.984 [2024-11-15 10:33:44.335146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.984 [2024-11-15 10:33:44.344607] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.984 [2024-11-15 10:33:44.344646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.984 [2024-11-15 10:33:44.360510] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.984 [2024-11-15 10:33:44.360593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.984 [2024-11-15 10:33:44.378617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.984 [2024-11-15 10:33:44.378655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.984 [2024-11-15 10:33:44.393568] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.984 [2024-11-15 10:33:44.393617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.984 [2024-11-15 10:33:44.403778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.984 [2024-11-15 10:33:44.403819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.984 [2024-11-15 10:33:44.419672] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.984 [2024-11-15 10:33:44.419733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.984 [2024-11-15 10:33:44.435799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.984 [2024-11-15 10:33:44.435852] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.984 [2024-11-15 10:33:44.454631] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.984 [2024-11-15 10:33:44.454667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.985 [2024-11-15 10:33:44.470334] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.985 [2024-11-15 10:33:44.470399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.244 [2024-11-15 10:33:44.487692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.244 [2024-11-15 10:33:44.487766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.244 [2024-11-15 10:33:44.503893] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.244 [2024-11-15 10:33:44.503963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.244 [2024-11-15 10:33:44.520835] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.244 [2024-11-15 10:33:44.520912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.244 [2024-11-15 10:33:44.537614] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.244 [2024-11-15 10:33:44.537664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.244 [2024-11-15 10:33:44.554794] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.244 [2024-11-15 10:33:44.554845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.244 [2024-11-15 10:33:44.571521] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.244 [2024-11-15 10:33:44.571582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.244 [2024-11-15 10:33:44.587422] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.244 [2024-11-15 10:33:44.587489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.244 [2024-11-15 10:33:44.605786] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.244 [2024-11-15 10:33:44.605852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.244 [2024-11-15 10:33:44.620747] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.244 [2024-11-15 10:33:44.620800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.244 [2024-11-15 10:33:44.631103] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.244 [2024-11-15 10:33:44.631153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.244 [2024-11-15 10:33:44.647158] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.244 [2024-11-15 10:33:44.647215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.244 [2024-11-15 10:33:44.663477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.244 [2024-11-15 10:33:44.663561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.244 [2024-11-15 10:33:44.682051] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.244 [2024-11-15 10:33:44.682109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.244 [2024-11-15 10:33:44.697647] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.244 [2024-11-15 10:33:44.697706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.244 [2024-11-15 10:33:44.715928] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.244 [2024-11-15 10:33:44.716023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.244 [2024-11-15 10:33:44.730440] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.244 [2024-11-15 10:33:44.730551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-11-15 10:33:44.746579] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-11-15 10:33:44.746671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 11246.50 IOPS, 87.86 MiB/s [2024-11-15T10:33:44.999Z] [2024-11-15 10:33:44.764811] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-11-15 10:33:44.764866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-11-15 10:33:44.779809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-11-15 10:33:44.779872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-11-15 10:33:44.796295] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-11-15 10:33:44.796372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-11-15 10:33:44.812612] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-11-15 10:33:44.812681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-11-15 10:33:44.829720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-11-15 10:33:44.829823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-11-15 10:33:44.845133] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-11-15 10:33:44.845189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-11-15 10:33:44.855188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-11-15 10:33:44.855250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-11-15 10:33:44.871339] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-11-15 10:33:44.871400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-11-15 10:33:44.887947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-11-15 10:33:44.888011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-11-15 10:33:44.904662] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
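Interim throughput samples (11309.00 IOPS, 11246.50 IOPS, ...) are scattered through the error noise, which makes them easy to miss when scanning the raw console output. One way to pull them out when triaging a log like this one, with console.log as an assumed filename:

# Extract every interim throughput sample from a saved console log.
grep -oE '[0-9]+\.[0-9]{2} IOPS, [0-9]+\.[0-9]{2} MiB/s' console.log
# e.g. prints:
#   5580.00 IOPS, 43.59 MiB/s
#   ...
#   11246.50 IOPS, 87.86 MiB/s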
00:10:19.501 [2024-11-15 10:33:44.904729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-11-15 10:33:44.922132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-11-15 10:33:44.922175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-11-15 10:33:44.937698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-11-15 10:33:44.937737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-11-15 10:33:44.947309] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-11-15 10:33:44.947361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-11-15 10:33:44.963675] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-11-15 10:33:44.963711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.501 [2024-11-15 10:33:44.980767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.501 [2024-11-15 10:33:44.980823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.789 [2024-11-15 10:33:44.998187] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.789 [2024-11-15 10:33:44.998254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.789 [2024-11-15 10:33:45.013728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.789 [2024-11-15 10:33:45.013779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.789 [2024-11-15 10:33:45.029852] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.789 [2024-11-15 10:33:45.029892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.789 [2024-11-15 10:33:45.045715] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.789 [2024-11-15 10:33:45.045768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.789 [2024-11-15 10:33:45.061995] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.789 [2024-11-15 10:33:45.062058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.789 [2024-11-15 10:33:45.072029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.789 [2024-11-15 10:33:45.072081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.789 [2024-11-15 10:33:45.088737] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.789 [2024-11-15 10:33:45.088789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.789 [2024-11-15 10:33:45.103863] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.789 [2024-11-15 10:33:45.103900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.789 [2024-11-15 10:33:45.121444] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.789 [2024-11-15 10:33:45.121492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.789 [2024-11-15 10:33:45.136796] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.789 [2024-11-15 10:33:45.136837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.790 [2024-11-15 10:33:45.154480] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.790 [2024-11-15 10:33:45.154554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.790 [2024-11-15 10:33:45.170922] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.790 [2024-11-15 10:33:45.170981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.790 [2024-11-15 10:33:45.189269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.790 [2024-11-15 10:33:45.189334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.790 [2024-11-15 10:33:45.204227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.790 [2024-11-15 10:33:45.204280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.790 [2024-11-15 10:33:45.220267] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.790 [2024-11-15 10:33:45.220328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.790 [2024-11-15 10:33:45.237395] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.790 [2024-11-15 10:33:45.237463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.790 [2024-11-15 10:33:45.253714] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.790 [2024-11-15 10:33:45.253774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.790 [2024-11-15 10:33:45.270340] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.790 [2024-11-15 10:33:45.270410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.790 [2024-11-15 10:33:45.280535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.790 [2024-11-15 10:33:45.280599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.049 [2024-11-15 10:33:45.296461] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.049 [2024-11-15 10:33:45.296547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.049 [2024-11-15 10:33:45.313018] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.049 [2024-11-15 10:33:45.313071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.049 [2024-11-15 10:33:45.331200] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.049 [2024-11-15 10:33:45.331265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.049 [2024-11-15 10:33:45.346536] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.049 [2024-11-15 10:33:45.346616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.049 [2024-11-15 10:33:45.357106] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.049 [2024-11-15 10:33:45.357150] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.049 [2024-11-15 10:33:45.372672] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.049 [2024-11-15 10:33:45.372728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.049 [2024-11-15 10:33:45.387275] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.049 [2024-11-15 10:33:45.387343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.049 [2024-11-15 10:33:45.403321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.049 [2024-11-15 10:33:45.403401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.049 [2024-11-15 10:33:45.420047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.049 [2024-11-15 10:33:45.420120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.049 [2024-11-15 10:33:45.437061] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.049 [2024-11-15 10:33:45.437117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.049 [2024-11-15 10:33:45.455150] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.049 [2024-11-15 10:33:45.455221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.049 [2024-11-15 10:33:45.469693] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.049 [2024-11-15 10:33:45.469760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.049 [2024-11-15 10:33:45.486568] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.049 [2024-11-15 10:33:45.486640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.049 [2024-11-15 10:33:45.501818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.049 [2024-11-15 10:33:45.501870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.049 [2024-11-15 10:33:45.520264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.049 [2024-11-15 10:33:45.520333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.049 [2024-11-15 10:33:45.535225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.049 [2024-11-15 10:33:45.535295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.309 [2024-11-15 10:33:45.550697] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.309 [2024-11-15 10:33:45.550761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.309 [2024-11-15 10:33:45.566966] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.309 [2024-11-15 10:33:45.567045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.309 [2024-11-15 10:33:45.585852] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.309 [2024-11-15 10:33:45.585893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.309 [2024-11-15 10:33:45.601056] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.309 [2024-11-15 10:33:45.601094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.309 [2024-11-15 10:33:45.618983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.309 [2024-11-15 10:33:45.619034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.309 [2024-11-15 10:33:45.633792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.309 [2024-11-15 10:33:45.633861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.309 [2024-11-15 10:33:45.650310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.309 [2024-11-15 10:33:45.650366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.309 [2024-11-15 10:33:45.666984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.309 [2024-11-15 10:33:45.667037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.309 [2024-11-15 10:33:45.683303] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.309 [2024-11-15 10:33:45.683366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.309 [2024-11-15 10:33:45.699158] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.309 [2024-11-15 10:33:45.699210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.309 [2024-11-15 10:33:45.716002] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.309 [2024-11-15 10:33:45.716056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.309 [2024-11-15 10:33:45.730545] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.309 [2024-11-15 10:33:45.730607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.309 [2024-11-15 10:33:45.747598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.309 [2024-11-15 10:33:45.747645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.309 11239.33 IOPS, 87.81 MiB/s [2024-11-15T10:33:45.807Z] [2024-11-15 10:33:45.763302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.309 [2024-11-15 10:33:45.763360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.309 [2024-11-15 10:33:45.781443] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.309 [2024-11-15 10:33:45.781522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.309 [2024-11-15 10:33:45.796776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.309 [2024-11-15 10:33:45.796814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.568 [2024-11-15 10:33:45.807460] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.568 [2024-11-15 10:33:45.807582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.568 [2024-11-15 10:33:45.824182] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
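Because only the timestamps change between repetitions, the hundreds of error lines here collapse to two distinct messages once timestamps are stripped; counting them is a quick check that a run produced the expected repeated noise rather than a spread of different failures. A small sketch, again assuming the log is saved as console.log:

# Strip bracketed and console timestamps, then count distinct error messages.
sed -E 's/\[[^]]*\]//g; s/^[0-9:. ]+//' console.log |
    grep -E 'already in use|Unable to add namespace' |
    sort | uniq -c | sort -rn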
00:10:20.568 [2024-11-15 10:33:45.824236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.568 [2024-11-15 10:33:45.839033] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.568 [2024-11-15 10:33:45.839098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.568 [2024-11-15 10:33:45.855304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.568 [2024-11-15 10:33:45.855362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.568 [2024-11-15 10:33:45.871210] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.568 [2024-11-15 10:33:45.871270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.568 [2024-11-15 10:33:45.889430] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.568 [2024-11-15 10:33:45.889481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.568 [2024-11-15 10:33:45.904018] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.568 [2024-11-15 10:33:45.904068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.568 [2024-11-15 10:33:45.920806] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.568 [2024-11-15 10:33:45.920858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.568 [2024-11-15 10:33:45.937588] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.568 [2024-11-15 10:33:45.937636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.568 [2024-11-15 10:33:45.954941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.568 [2024-11-15 10:33:45.954995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.568 [2024-11-15 10:33:45.969088] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.568 [2024-11-15 10:33:45.969132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.568 [2024-11-15 10:33:45.984229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.568 [2024-11-15 10:33:45.984286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.568 [2024-11-15 10:33:45.993745] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.568 [2024-11-15 10:33:45.993783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.568 [2024-11-15 10:33:46.009827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.568 [2024-11-15 10:33:46.009871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.568 [2024-11-15 10:33:46.026671] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.568 [2024-11-15 10:33:46.026727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.568 [2024-11-15 10:33:46.042604] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.568 [2024-11-15 10:33:46.042659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.568 [2024-11-15 10:33:46.059284] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.568 [2024-11-15 10:33:46.059347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.827 [2024-11-15 10:33:46.076647] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.827 [2024-11-15 10:33:46.076698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.827 [2024-11-15 10:33:46.091683] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.827 [2024-11-15 10:33:46.091747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.827 [2024-11-15 10:33:46.107614] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.827 [2024-11-15 10:33:46.107674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.827 [2024-11-15 10:33:46.124970] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.827 [2024-11-15 10:33:46.125022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.827 [2024-11-15 10:33:46.140298] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.827 [2024-11-15 10:33:46.140374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.827 [2024-11-15 10:33:46.156330] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.827 [2024-11-15 10:33:46.156395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.827 [2024-11-15 10:33:46.172861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.827 [2024-11-15 10:33:46.172924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.827 [2024-11-15 10:33:46.189733] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.827 [2024-11-15 10:33:46.189804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.827 [2024-11-15 10:33:46.206162] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.827 [2024-11-15 10:33:46.206251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.827 [2024-11-15 10:33:46.222793] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.827 [2024-11-15 10:33:46.222875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.827 [2024-11-15 10:33:46.238774] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.827 [2024-11-15 10:33:46.238835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.827 [2024-11-15 10:33:46.257192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.827 [2024-11-15 10:33:46.257230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.827 [2024-11-15 10:33:46.272622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.827 [2024-11-15 10:33:46.272674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.827 [2024-11-15 10:33:46.290665] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.827 [2024-11-15 10:33:46.290716] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.827 [2024-11-15 10:33:46.304234] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.827 [2024-11-15 10:33:46.304269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.827 [2024-11-15 10:33:46.319879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.827 [2024-11-15 10:33:46.319945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.087 [2024-11-15 10:33:46.338566] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.087 [2024-11-15 10:33:46.338622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.087 [2024-11-15 10:33:46.354306] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.087 [2024-11-15 10:33:46.354353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.087 [2024-11-15 10:33:46.364393] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.087 [2024-11-15 10:33:46.364445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.087 [2024-11-15 10:33:46.379809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.087 [2024-11-15 10:33:46.379847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.087 [2024-11-15 10:33:46.396547] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.087 [2024-11-15 10:33:46.396584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.087 [2024-11-15 10:33:46.413139] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.087 [2024-11-15 10:33:46.413177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.087 [2024-11-15 10:33:46.431310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.087 [2024-11-15 10:33:46.431347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.087 [2024-11-15 10:33:46.446529] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.087 [2024-11-15 10:33:46.446592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.087 [2024-11-15 10:33:46.462697] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.087 [2024-11-15 10:33:46.462747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.087 [2024-11-15 10:33:46.479400] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.087 [2024-11-15 10:33:46.479456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.087 [2024-11-15 10:33:46.496489] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.087 [2024-11-15 10:33:46.496557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.087 [2024-11-15 10:33:46.514938] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.087 [2024-11-15 10:33:46.514992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.087 [2024-11-15 10:33:46.529474] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.087 [2024-11-15 10:33:46.529551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.087 [2024-11-15 10:33:46.545793] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.087 [2024-11-15 10:33:46.545844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.087 [2024-11-15 10:33:46.562315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.087 [2024-11-15 10:33:46.562381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.087 [2024-11-15 10:33:46.578850] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.087 [2024-11-15 10:33:46.578903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.346 [2024-11-15 10:33:46.597063] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.346 [2024-11-15 10:33:46.597101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.346 [2024-11-15 10:33:46.612580] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.346 [2024-11-15 10:33:46.612635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.346 [2024-11-15 10:33:46.621662] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.346 [2024-11-15 10:33:46.621713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.346 [2024-11-15 10:33:46.638109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.346 [2024-11-15 10:33:46.638160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.346 [2024-11-15 10:33:46.656339] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.346 [2024-11-15 10:33:46.656390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.346 [2024-11-15 10:33:46.671684] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.346 [2024-11-15 10:33:46.671719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.346 [2024-11-15 10:33:46.689707] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.346 [2024-11-15 10:33:46.689742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.346 [2024-11-15 10:33:46.706366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.347 [2024-11-15 10:33:46.706437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.347 [2024-11-15 10:33:46.722955] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.347 [2024-11-15 10:33:46.723008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.347 [2024-11-15 10:33:46.739344] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.347 [2024-11-15 10:33:46.739398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.347 [2024-11-15 10:33:46.755558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.347 [2024-11-15 10:33:46.755622] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.347 11243.75 IOPS, 87.84 MiB/s [2024-11-15T10:33:46.845Z] [2024-11-15 10:33:46.773766] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.347 [2024-11-15 10:33:46.773836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.347 [2024-11-15 10:33:46.789171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.347 [2024-11-15 10:33:46.789211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.347 [2024-11-15 10:33:46.805975] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.347 [2024-11-15 10:33:46.806013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.347 [2024-11-15 10:33:46.823916] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.347 [2024-11-15 10:33:46.823970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.347 [2024-11-15 10:33:46.839041] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.347 [2024-11-15 10:33:46.839092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.606 [2024-11-15 10:33:46.848790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.606 [2024-11-15 10:33:46.848843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.606 [2024-11-15 10:33:46.865396] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.606 [2024-11-15 10:33:46.865465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.606 [2024-11-15 10:33:46.881242] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.606 [2024-11-15 10:33:46.881294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.606 [2024-11-15 10:33:46.898636] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.606 [2024-11-15 10:33:46.898703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.606 [2024-11-15 10:33:46.914644] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.606 [2024-11-15 10:33:46.914697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.606 [2024-11-15 10:33:46.931915] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.606 [2024-11-15 10:33:46.931966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.606 [2024-11-15 10:33:46.947268] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.606 [2024-11-15 10:33:46.947320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.606 [2024-11-15 10:33:46.956879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.606 [2024-11-15 10:33:46.956924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.606 [2024-11-15 10:33:46.971913] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.606 [2024-11-15 10:33:46.971965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.606 [2024-11-15 
10:33:46.987922] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.606 [2024-11-15 10:33:46.987974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.606 [2024-11-15 10:33:47.004786] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.606 [2024-11-15 10:33:47.004838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.606 [2024-11-15 10:33:47.021715] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.606 [2024-11-15 10:33:47.021768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.606 [2024-11-15 10:33:47.039129] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.606 [2024-11-15 10:33:47.039181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.606 [2024-11-15 10:33:47.054682] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.606 [2024-11-15 10:33:47.054734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.606 [2024-11-15 10:33:47.073289] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.606 [2024-11-15 10:33:47.073327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.606 [2024-11-15 10:33:47.088248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.606 [2024-11-15 10:33:47.088287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.865 [2024-11-15 10:33:47.104723] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.865 [2024-11-15 10:33:47.104756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.865 [2024-11-15 10:33:47.123038] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.865 [2024-11-15 10:33:47.123095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.865 [2024-11-15 10:33:47.137678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.865 [2024-11-15 10:33:47.137719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.865 [2024-11-15 10:33:47.153839] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.865 [2024-11-15 10:33:47.153891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.865 [2024-11-15 10:33:47.170997] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.865 [2024-11-15 10:33:47.171063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.865 [2024-11-15 10:33:47.187386] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.865 [2024-11-15 10:33:47.187438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.865 [2024-11-15 10:33:47.203910] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.865 [2024-11-15 10:33:47.203957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.865 [2024-11-15 10:33:47.221018] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.865 [2024-11-15 10:33:47.221056] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.865 [2024-11-15 10:33:47.237373] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.865 [2024-11-15 10:33:47.237443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.865 [2024-11-15 10:33:47.253851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.865 [2024-11-15 10:33:47.253904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.865 [2024-11-15 10:33:47.271132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.865 [2024-11-15 10:33:47.271192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.865 [2024-11-15 10:33:47.287650] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.865 [2024-11-15 10:33:47.287686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.865 [2024-11-15 10:33:47.305776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.865 [2024-11-15 10:33:47.305813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.865 [2024-11-15 10:33:47.321076] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.865 [2024-11-15 10:33:47.321113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.865 [2024-11-15 10:33:47.339996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.865 [2024-11-15 10:33:47.340049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.865 [2024-11-15 10:33:47.355206] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.865 [2024-11-15 10:33:47.355266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.124 [2024-11-15 10:33:47.371816] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.124 [2024-11-15 10:33:47.371850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.124 [2024-11-15 10:33:47.390029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.124 [2024-11-15 10:33:47.390097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.124 [2024-11-15 10:33:47.405089] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.124 [2024-11-15 10:33:47.405125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.124 [2024-11-15 10:33:47.415273] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.124 [2024-11-15 10:33:47.415325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.124 [2024-11-15 10:33:47.430738] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.124 [2024-11-15 10:33:47.430793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.124 [2024-11-15 10:33:47.447770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.124 [2024-11-15 10:33:47.447820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.124 [2024-11-15 10:33:47.463920] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.124 [2024-11-15 10:33:47.463976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.124 [2024-11-15 10:33:47.481147] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.124 [2024-11-15 10:33:47.481184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.124 [2024-11-15 10:33:47.496676] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.124 [2024-11-15 10:33:47.496712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.124 [2024-11-15 10:33:47.506209] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.124 [2024-11-15 10:33:47.506259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.125 [2024-11-15 10:33:47.522232] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.125 [2024-11-15 10:33:47.522266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.125 [2024-11-15 10:33:47.542796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.125 [2024-11-15 10:33:47.542848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.125 [2024-11-15 10:33:47.557976] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.125 [2024-11-15 10:33:47.558026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.125 [2024-11-15 10:33:47.574493] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.125 [2024-11-15 10:33:47.574556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.125 [2024-11-15 10:33:47.590511] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.125 [2024-11-15 10:33:47.590588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.125 [2024-11-15 10:33:47.600190] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.125 [2024-11-15 10:33:47.600239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.125 [2024-11-15 10:33:47.616236] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.125 [2024-11-15 10:33:47.616289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.384 [2024-11-15 10:33:47.632640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.384 [2024-11-15 10:33:47.632690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.384 [2024-11-15 10:33:47.651127] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.384 [2024-11-15 10:33:47.651179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.384 [2024-11-15 10:33:47.665352] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.384 [2024-11-15 10:33:47.665403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.384 [2024-11-15 10:33:47.680536] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.384 [2024-11-15 10:33:47.680597] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.384 [2024-11-15 10:33:47.698954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.384 [2024-11-15 10:33:47.699022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.384 [2024-11-15 10:33:47.714721] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.384 [2024-11-15 10:33:47.714757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.384 [2024-11-15 10:33:47.731205] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.384 [2024-11-15 10:33:47.731256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.384 [2024-11-15 10:33:47.749405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.384 [2024-11-15 10:33:47.749472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.384 11301.20 IOPS, 88.29 MiB/s [2024-11-15T10:33:47.882Z] [2024-11-15 10:33:47.763676] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.384 [2024-11-15 10:33:47.763735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.384 00:10:22.384 Latency(us) 00:10:22.384 [2024-11-15T10:33:47.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.384 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:22.384 Nvme1n1 : 5.01 11302.81 88.30 0.00 0.00 11309.21 4676.89 20137.43 00:10:22.384 [2024-11-15T10:33:47.882Z] =================================================================================================================== 00:10:22.384 [2024-11-15T10:33:47.882Z] Total : 11302.81 88.30 0.00 0.00 11309.21 4676.89 20137.43 00:10:22.384 [2024-11-15 10:33:47.773504] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.384 [2024-11-15 10:33:47.773587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.384 [2024-11-15 10:33:47.785522] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.384 [2024-11-15 10:33:47.785602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.384 [2024-11-15 10:33:47.797603] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.384 [2024-11-15 10:33:47.797660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.384 [2024-11-15 10:33:47.809613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.384 [2024-11-15 10:33:47.809672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.384 [2024-11-15 10:33:47.821594] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.384 [2024-11-15 10:33:47.821636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.384 [2024-11-15 10:33:47.833585] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.384 [2024-11-15 10:33:47.833630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.384 [2024-11-15 10:33:47.845622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.384 [2024-11-15 
10:33:47.845695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.384 [2024-11-15 10:33:47.857620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.384 [2024-11-15 10:33:47.857678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.384 [2024-11-15 10:33:47.869623] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.384 [2024-11-15 10:33:47.869681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.643 [2024-11-15 10:33:47.881633] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.643 [2024-11-15 10:33:47.881680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.643 [2024-11-15 10:33:47.893622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.643 [2024-11-15 10:33:47.893681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.643 [2024-11-15 10:33:47.905607] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.643 [2024-11-15 10:33:47.905645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.643 [2024-11-15 10:33:47.917609] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.643 [2024-11-15 10:33:47.917657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.643 [2024-11-15 10:33:47.929624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.643 [2024-11-15 10:33:47.929678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.643 [2024-11-15 10:33:47.941649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.643 [2024-11-15 10:33:47.941693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.644 [2024-11-15 10:33:47.953633] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.644 [2024-11-15 10:33:47.953688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.644 [2024-11-15 10:33:47.965642] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.644 [2024-11-15 10:33:47.965695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.644 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65568) - No such process 00:10:22.644 10:33:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65568 00:10:22.644 10:33:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.644 10:33:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.644 10:33:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:22.644 10:33:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.644 10:33:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:22.644 10:33:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.644 10:33:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:10:22.644 delay0 00:10:22.644 10:33:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.644 10:33:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:22.644 10:33:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.644 10:33:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:22.644 10:33:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.644 10:33:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:10:22.902 [2024-11-15 10:33:48.180861] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:29.562 Initializing NVMe Controllers 00:10:29.562 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:29.562 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:29.562 Initialization complete. Launching workers. 00:10:29.562 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 60 00:10:29.562 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 347, failed to submit 33 00:10:29.562 success 217, unsuccessful 130, failed 0 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:29.562 rmmod nvme_tcp 00:10:29.562 rmmod nvme_fabrics 00:10:29.562 rmmod nvme_keyring 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65425 ']' 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65425 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 65425 ']' 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 65425 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:29.562 10:33:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65425 00:10:29.562 killing process with pid 65425 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65425' 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 65425 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 65425 00:10:29.562 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.563 10:33:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:10:29.563 00:10:29.563 real 0m24.387s 00:10:29.563 user 0m39.691s 00:10:29.563 sys 0m6.961s 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.563 ************************************ 00:10:29.563 END TEST nvmf_zcopy 00:10:29.563 ************************************ 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:29.563 ************************************ 00:10:29.563 START TEST nvmf_nmic 00:10:29.563 ************************************ 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:29.563 * Looking for test storage... 00:10:29.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:10:29.563 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.822 10:33:55 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:29.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.822 --rc genhtml_branch_coverage=1 00:10:29.822 --rc genhtml_function_coverage=1 00:10:29.822 --rc genhtml_legend=1 00:10:29.822 --rc geninfo_all_blocks=1 00:10:29.822 --rc geninfo_unexecuted_blocks=1 00:10:29.822 00:10:29.822 ' 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:29.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.822 --rc genhtml_branch_coverage=1 00:10:29.822 --rc genhtml_function_coverage=1 00:10:29.822 --rc genhtml_legend=1 00:10:29.822 --rc geninfo_all_blocks=1 00:10:29.822 --rc geninfo_unexecuted_blocks=1 00:10:29.822 00:10:29.822 ' 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:29.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.822 --rc genhtml_branch_coverage=1 00:10:29.822 --rc genhtml_function_coverage=1 00:10:29.822 --rc genhtml_legend=1 00:10:29.822 --rc geninfo_all_blocks=1 00:10:29.822 --rc geninfo_unexecuted_blocks=1 00:10:29.822 00:10:29.822 ' 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:29.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.822 --rc genhtml_branch_coverage=1 00:10:29.822 --rc genhtml_function_coverage=1 00:10:29.822 --rc genhtml_legend=1 00:10:29.822 --rc geninfo_all_blocks=1 00:10:29.822 --rc geninfo_unexecuted_blocks=1 00:10:29.822 00:10:29.822 ' 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:29.822 10:33:55 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.822 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:29.823 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:29.823 10:33:55 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:29.823 Cannot 
find device "nvmf_init_br" 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:29.823 Cannot find device "nvmf_init_br2" 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:29.823 Cannot find device "nvmf_tgt_br" 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:29.823 Cannot find device "nvmf_tgt_br2" 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:29.823 Cannot find device "nvmf_init_br" 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:29.823 Cannot find device "nvmf_init_br2" 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:29.823 Cannot find device "nvmf_tgt_br" 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:29.823 Cannot find device "nvmf_tgt_br2" 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:29.823 Cannot find device "nvmf_br" 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:29.823 Cannot find device "nvmf_init_if" 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:29.823 Cannot find device "nvmf_init_if2" 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:29.823 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:29.823 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:29.823 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:30.082 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:30.082 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:10:30.082 00:10:30.082 --- 10.0.0.3 ping statistics --- 00:10:30.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.082 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:30.082 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:30.082 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:10:30.082 00:10:30.082 --- 10.0.0.4 ping statistics --- 00:10:30.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.082 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:30.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:30.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:10:30.082 00:10:30.082 --- 10.0.0.1 ping statistics --- 00:10:30.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.082 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:30.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:30.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:10:30.082 00:10:30.082 --- 10.0.0.2 ping statistics --- 00:10:30.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.082 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65943 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65943 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 65943 ']' 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:30.082 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.083 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:30.083 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.341 [2024-11-15 10:33:55.586645] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
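Condensed, the setup above gives the target its own network namespace and bridges it to the host: each endpoint is one end of a veth pair, the peer ends are enslaved to nvmf_br, iptables accepts the NVMe/TCP port, and the four pings prove reachability in both directions. A sketch of the same topology using the names and addresses from this log (the /24 prefix is an assumption consistent with the 10.0.0.x addressing; the second interface pair is elided):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the netns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.3    # host -> target namespace, as verified above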
00:10:30.341 [2024-11-15 10:33:55.586742] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.341 [2024-11-15 10:33:55.745387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:30.341 [2024-11-15 10:33:55.818254] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.341 [2024-11-15 10:33:55.818326] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.341 [2024-11-15 10:33:55.818341] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.341 [2024-11-15 10:33:55.818351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.341 [2024-11-15 10:33:55.818361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:30.341 [2024-11-15 10:33:55.819585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.341 [2024-11-15 10:33:55.819646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.341 [2024-11-15 10:33:55.819783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.341 [2024-11-15 10:33:55.819792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.601 [2024-11-15 10:33:55.877555] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:30.601 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:30.601 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:10:30.601 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:30.601 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:30.601 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.601 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.601 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:30.601 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.601 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.601 [2024-11-15 10:33:56.005205] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.601 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.601 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:30.601 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.601 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.601 Malloc0 00:10:30.601 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.601 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:30.601 10:33:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.601 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.601 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.601 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:30.601 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.601 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.601 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.601 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:30.601 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.601 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.601 [2024-11-15 10:33:56.075306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:30.601 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.601 test case1: single bdev can't be used in multiple subsystems 00:10:30.601 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:30.602 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:30.602 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.602 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.602 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.602 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:30.602 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.602 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.866 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.867 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:30.867 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:30.867 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.867 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.867 [2024-11-15 10:33:56.103135] bdev.c:8502:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:30.867 [2024-11-15 10:33:56.103186] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:30.867 [2024-11-15 10:33:56.103202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.867 request: 00:10:30.867 { 00:10:30.867 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:30.867 "namespace": { 00:10:30.867 "bdev_name": "Malloc0", 00:10:30.867 "no_auto_visible": false, 00:10:30.867 "no_metadata": false 00:10:30.867 }, 00:10:30.867 "method": "nvmf_subsystem_add_ns", 00:10:30.867 "req_id": 1 00:10:30.867 } 00:10:30.867 Got JSON-RPC error response 00:10:30.867 response: 00:10:30.867 { 00:10:30.867 "code": -32602, 00:10:30.867 "message": "Invalid parameters" 00:10:30.867 } 00:10:30.867 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:30.867 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:30.867 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:30.867 Adding namespace failed - expected result. 00:10:30.867 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:30.867 test case2: host connect to nvmf target in multiple paths 00:10:30.867 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:30.867 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:10:30.867 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.867 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.867 [2024-11-15 10:33:56.115286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:10:30.867 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.867 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid=50e4d619-cecf-4dd2-989d-1336dee31d8f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:30.867 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid=50e4d619-cecf-4dd2-989d-1336dee31d8f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:10:31.125 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:31.125 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:10:31.125 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:31.125 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:31.125 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:10:33.025 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:33.025 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:33.025 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:33.025 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:33.025 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 
00:10:33.025 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:10:33.025 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:33.025 [global] 00:10:33.025 thread=1 00:10:33.025 invalidate=1 00:10:33.025 rw=write 00:10:33.025 time_based=1 00:10:33.025 runtime=1 00:10:33.025 ioengine=libaio 00:10:33.025 direct=1 00:10:33.025 bs=4096 00:10:33.025 iodepth=1 00:10:33.025 norandommap=0 00:10:33.025 numjobs=1 00:10:33.025 00:10:33.025 verify_dump=1 00:10:33.025 verify_backlog=512 00:10:33.025 verify_state_save=0 00:10:33.025 do_verify=1 00:10:33.025 verify=crc32c-intel 00:10:33.025 [job0] 00:10:33.025 filename=/dev/nvme0n1 00:10:33.025 Could not set queue depth (nvme0n1) 00:10:33.334 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.334 fio-3.35 00:10:33.334 Starting 1 thread 00:10:34.271 00:10:34.271 job0: (groupid=0, jobs=1): err= 0: pid=66027: Fri Nov 15 10:33:59 2024 00:10:34.271 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:34.271 slat (nsec): min=11346, max=44423, avg=13183.97, stdev=3021.23 00:10:34.271 clat (usec): min=139, max=331, avg=177.11, stdev=20.36 00:10:34.271 lat (usec): min=152, max=343, avg=190.29, stdev=20.98 00:10:34.271 clat percentiles (usec): 00:10:34.271 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:10:34.271 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:10:34.271 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 204], 95.00th=[ 217], 00:10:34.271 | 99.00th=[ 249], 99.50th=[ 265], 99.90th=[ 293], 99.95th=[ 310], 00:10:34.271 | 99.99th=[ 330] 00:10:34.271 write: IOPS=3139, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1001msec); 0 zone resets 00:10:34.271 slat (usec): min=14, max=120, avg=19.59, stdev= 5.09 00:10:34.271 clat (usec): min=84, max=733, avg=109.64, stdev=18.37 00:10:34.271 lat (usec): min=104, max=751, avg=129.23, stdev=20.29 00:10:34.271 clat percentiles (usec): 00:10:34.271 | 1.00th=[ 89], 5.00th=[ 91], 10.00th=[ 94], 20.00th=[ 98], 00:10:34.271 | 30.00th=[ 101], 40.00th=[ 104], 50.00th=[ 108], 60.00th=[ 110], 00:10:34.271 | 70.00th=[ 114], 80.00th=[ 121], 90.00th=[ 130], 95.00th=[ 137], 00:10:34.271 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 208], 99.95th=[ 273], 00:10:34.271 | 99.99th=[ 734] 00:10:34.271 bw ( KiB/s): min=12263, max=12263, per=97.64%, avg=12263.00, stdev= 0.00, samples=1 00:10:34.271 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:10:34.271 lat (usec) : 100=13.52%, 250=85.99%, 500=0.48%, 750=0.02% 00:10:34.271 cpu : usr=2.80%, sys=7.40%, ctx=6215, majf=0, minf=5 00:10:34.271 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.271 issued rwts: total=3072,3143,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.271 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.271 00:10:34.271 Run status group 0 (all jobs): 00:10:34.271 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:34.271 WRITE: bw=12.3MiB/s (12.9MB/s), 12.3MiB/s-12.3MiB/s (12.9MB/s-12.9MB/s), io=12.3MiB (12.9MB), run=1001-1001msec 00:10:34.271 00:10:34.271 Disk stats (read/write): 00:10:34.271 nvme0n1: ios=2635/3072, merge=0/0, 
ticks=482/365, in_queue=847, util=91.28% 00:10:34.271 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:34.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:34.271 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:34.271 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:10:34.271 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:34.271 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.271 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:34.271 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:34.529 rmmod nvme_tcp 00:10:34.529 rmmod nvme_fabrics 00:10:34.529 rmmod nvme_keyring 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65943 ']' 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65943 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 65943 ']' 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 65943 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65943 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65943' 00:10:34.529 killing process with pid 65943 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@971 -- # kill 65943 00:10:34.529 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 65943 00:10:34.788 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:34.788 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:34.788 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:34.788 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:34.788 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:34.788 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:34.788 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:34.788 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:34.788 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:34.788 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:34.788 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:34.788 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:34.788 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:34.788 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:34.788 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:34.788 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:34.788 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:34.788 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:34.788 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:34.788 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:35.046 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:35.046 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:35.046 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:35.046 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.046 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.046 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.046 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:10:35.046 00:10:35.046 real 0m5.476s 00:10:35.046 user 0m15.991s 00:10:35.046 sys 0m2.380s 00:10:35.046 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:35.046 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.046 
************************************ 00:10:35.046 END TEST nvmf_nmic 00:10:35.046 ************************************ 00:10:35.046 10:34:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:35.046 10:34:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:35.046 10:34:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:35.046 10:34:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:35.046 ************************************ 00:10:35.046 START TEST nvmf_fio_target 00:10:35.046 ************************************ 00:10:35.046 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:35.046 * Looking for test storage... 00:10:35.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:35.047 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:35.047 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:10:35.047 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:35.306 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:35.306 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.306 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.306 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.306 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.306 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.306 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.306 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.306 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.306 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:35.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.307 --rc genhtml_branch_coverage=1 00:10:35.307 --rc genhtml_function_coverage=1 00:10:35.307 --rc genhtml_legend=1 00:10:35.307 --rc geninfo_all_blocks=1 00:10:35.307 --rc geninfo_unexecuted_blocks=1 00:10:35.307 00:10:35.307 ' 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:35.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.307 --rc genhtml_branch_coverage=1 00:10:35.307 --rc genhtml_function_coverage=1 00:10:35.307 --rc genhtml_legend=1 00:10:35.307 --rc geninfo_all_blocks=1 00:10:35.307 --rc geninfo_unexecuted_blocks=1 00:10:35.307 00:10:35.307 ' 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:35.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.307 --rc genhtml_branch_coverage=1 00:10:35.307 --rc genhtml_function_coverage=1 00:10:35.307 --rc genhtml_legend=1 00:10:35.307 --rc geninfo_all_blocks=1 00:10:35.307 --rc geninfo_unexecuted_blocks=1 00:10:35.307 00:10:35.307 ' 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:35.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.307 --rc genhtml_branch_coverage=1 00:10:35.307 --rc genhtml_function_coverage=1 00:10:35.307 --rc genhtml_legend=1 00:10:35.307 --rc geninfo_all_blocks=1 00:10:35.307 --rc geninfo_unexecuted_blocks=1 00:10:35.307 00:10:35.307 ' 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:35.307 
10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:35.307 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:35.307 10:34:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.307 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:35.308 Cannot find device "nvmf_init_br" 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:35.308 Cannot find device "nvmf_init_br2" 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:35.308 Cannot find device "nvmf_tgt_br" 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:35.308 Cannot find device "nvmf_tgt_br2" 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:35.308 Cannot find device "nvmf_init_br" 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:35.308 Cannot find device "nvmf_init_br2" 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:35.308 Cannot find device "nvmf_tgt_br" 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:35.308 Cannot find device "nvmf_tgt_br2" 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:35.308 Cannot find device "nvmf_br" 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:35.308 Cannot find device "nvmf_init_if" 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:35.308 Cannot find device "nvmf_init_if2" 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:35.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:10:35.308 
10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:35.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:35.308 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:35.567 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:35.567 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:35.567 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:35.567 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:10:35.567 00:10:35.567 --- 10.0.0.3 ping statistics --- 00:10:35.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.567 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:35.567 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:35.567 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:35.567 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:10:35.567 00:10:35.567 --- 10.0.0.4 ping statistics --- 00:10:35.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.567 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:35.567 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:35.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:35.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:10:35.567 00:10:35.567 --- 10.0.0.1 ping statistics --- 00:10:35.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.567 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:10:35.567 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:35.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:35.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:10:35.567 00:10:35.567 --- 10.0.0.2 ping statistics --- 00:10:35.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.567 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:35.567 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:35.567 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:10:35.567 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:35.567 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:35.567 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:35.567 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:35.567 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:35.567 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:35.567 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:35.567 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:35.567 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:35.567 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:35.568 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.568 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66262 00:10:35.568 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66262 00:10:35.568 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:35.568 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 66262 ']' 00:10:35.568 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.568 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:35.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.568 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.568 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:35.568 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.826 [2024-11-15 10:34:01.107509] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
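What nvmf/common.sh@174-225 built above is the self-contained test network for this run: two veth pairs for the initiator side (nvmf_init_if, nvmf_init_if2), two more whose far ends (nvmf_tgt_if, nvmf_tgt_if2) are moved into the nvmf_tgt_ns_spdk namespace, all bridge-side peers enslaved to nvmf_br, iptables ACCEPT rules for the NVMe/TCP port 4420, and ping checks in both directions. A minimal sketch of the same topology, cut down to one pair per side and using hypothetical interface/namespace names, would be (root required):

    # Sketch only -- one veth pair per side, assumed names (tgt_ns, init_if, tgt_if, ...)
    ip netns add tgt_ns                                       # namespace that will host nvmf_tgt
    ip link add init_if type veth peer name init_br           # initiator end + bridge-side end
    ip link add tgt_if type veth peer name tgt_br             # target end + bridge-side end
    ip link set tgt_if netns tgt_ns                           # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev init_if                       # initiator address (host side)
    ip netns exec tgt_ns ip addr add 10.0.0.3/24 dev tgt_if   # target address (namespace side)
    ip link set init_if up; ip link set init_br up; ip link set tgt_br up
    ip netns exec tgt_ns ip link set tgt_if up
    ip netns exec tgt_ns ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up   # bridge joining the two pairs
    ip link set init_br master nvmf_br
    ip link set tgt_br master nvmf_br
    iptables -I INPUT 1 -i init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.3                                        # reachability, as in common.sh@222

The ip netns exec nvmf_tgt_ns_spdk prefix that common.sh@227 folds into NVMF_APP is what makes the nvmf_tgt process just started above (nvmfpid=66262) run inside that namespace, listening on 10.0.0.3 while the initiator stays on the host side of the bridge.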
00:10:35.826 [2024-11-15 10:34:01.107624] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.826 [2024-11-15 10:34:01.264472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.085 [2024-11-15 10:34:01.336196] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.085 [2024-11-15 10:34:01.336258] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.085 [2024-11-15 10:34:01.336272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.085 [2024-11-15 10:34:01.336283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.085 [2024-11-15 10:34:01.336292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.085 [2024-11-15 10:34:01.337572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.085 [2024-11-15 10:34:01.337653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.085 [2024-11-15 10:34:01.337695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:36.085 [2024-11-15 10:34:01.337698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.085 [2024-11-15 10:34:01.396358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:36.085 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:36.085 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:10:36.085 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:36.085 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:36.085 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.085 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.085 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:36.344 [2024-11-15 10:34:01.804240] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.344 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:36.910 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:36.910 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.169 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:37.169 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.429 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:37.429 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.995 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:37.995 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:38.254 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.513 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:38.513 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.771 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:38.771 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:39.339 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:39.339 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:39.595 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:39.854 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:39.854 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:40.113 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:40.113 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:40.371 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:40.630 [2024-11-15 10:34:05.890815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:40.630 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:40.888 10:34:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:41.151 10:34:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid=50e4d619-cecf-4dd2-989d-1336dee31d8f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:41.151 10:34:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:41.151 10:34:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:10:41.151 10:34:06 
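At this point the target side is fully provisioned: target/fio.sh@19-46 above created the transport, the backing bdevs, the subsystem with its four namespaces and the TCP listener, then attached the kernel initiator with nvme connect. Condensed into a sketch (rpc.py standing in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path, and the --hostnqn/--hostid flags omitted), the sequence is:

    rpc.py nvmf_create_transport -t tcp -o -u 8192                     # fio.sh@19, transport flags exactly as traced above
    rpc.py bdev_malloc_create 64 512                                   # returns Malloc0; repeated for Malloc1..Malloc6
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'   # RAID0 over two malloc bdevs (fio.sh@26)
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # fio.sh@36, repeated for Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0      # fio.sh@41; concat0 follows at @44
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420

Four namespaces on one controller is why waitforserial is invoked with a count of 4 here: it polls lsblk -l -o NAME,SERIAL until four block devices carry the SPDKISFASTANDAWESOME serial, which gives the fio jobs below /dev/nvme0n1 through /dev/nvme0n4.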
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:41.151 10:34:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:10:41.151 10:34:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:10:41.151 10:34:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:10:43.681 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:43.681 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:43.681 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:43.681 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:10:43.681 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:43.681 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:10:43.681 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:43.681 [global] 00:10:43.681 thread=1 00:10:43.681 invalidate=1 00:10:43.681 rw=write 00:10:43.681 time_based=1 00:10:43.681 runtime=1 00:10:43.681 ioengine=libaio 00:10:43.681 direct=1 00:10:43.681 bs=4096 00:10:43.681 iodepth=1 00:10:43.681 norandommap=0 00:10:43.681 numjobs=1 00:10:43.681 00:10:43.681 verify_dump=1 00:10:43.681 verify_backlog=512 00:10:43.681 verify_state_save=0 00:10:43.681 do_verify=1 00:10:43.681 verify=crc32c-intel 00:10:43.681 [job0] 00:10:43.681 filename=/dev/nvme0n1 00:10:43.681 [job1] 00:10:43.681 filename=/dev/nvme0n2 00:10:43.681 [job2] 00:10:43.681 filename=/dev/nvme0n3 00:10:43.681 [job3] 00:10:43.681 filename=/dev/nvme0n4 00:10:43.681 Could not set queue depth (nvme0n1) 00:10:43.681 Could not set queue depth (nvme0n2) 00:10:43.681 Could not set queue depth (nvme0n3) 00:10:43.681 Could not set queue depth (nvme0n4) 00:10:43.681 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.681 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.681 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.681 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.681 fio-3.35 00:10:43.681 Starting 4 threads 00:10:44.633 00:10:44.633 job0: (groupid=0, jobs=1): err= 0: pid=66444: Fri Nov 15 10:34:09 2024 00:10:44.633 read: IOPS=3021, BW=11.8MiB/s (12.4MB/s)(11.8MiB/1001msec) 00:10:44.633 slat (nsec): min=11350, max=49617, avg=14114.52, stdev=2096.34 00:10:44.633 clat (usec): min=136, max=499, avg=166.95, stdev=17.93 00:10:44.633 lat (usec): min=150, max=512, avg=181.07, stdev=18.56 00:10:44.633 clat percentiles (usec): 00:10:44.633 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:10:44.633 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:10:44.633 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 204], 00:10:44.633 | 99.00th=[ 231], 99.50th=[ 235], 99.90th=[ 249], 99.95th=[ 260], 00:10:44.633 | 99.99th=[ 498] 
00:10:44.633 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:44.633 slat (usec): min=14, max=146, avg=21.58, stdev= 4.73 00:10:44.633 clat (usec): min=89, max=2086, avg=122.39, stdev=37.97 00:10:44.633 lat (usec): min=108, max=2105, avg=143.97, stdev=38.45 00:10:44.633 clat percentiles (usec): 00:10:44.633 | 1.00th=[ 100], 5.00th=[ 105], 10.00th=[ 109], 20.00th=[ 114], 00:10:44.633 | 30.00th=[ 117], 40.00th=[ 119], 50.00th=[ 122], 60.00th=[ 124], 00:10:44.633 | 70.00th=[ 127], 80.00th=[ 131], 90.00th=[ 135], 95.00th=[ 141], 00:10:44.633 | 99.00th=[ 149], 99.50th=[ 153], 99.90th=[ 202], 99.95th=[ 594], 00:10:44.633 | 99.99th=[ 2089] 00:10:44.633 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:44.633 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:44.633 lat (usec) : 100=0.44%, 250=99.48%, 500=0.05%, 750=0.02% 00:10:44.633 lat (msec) : 4=0.02% 00:10:44.633 cpu : usr=2.00%, sys=8.90%, ctx=6098, majf=0, minf=9 00:10:44.633 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.633 issued rwts: total=3025,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.633 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.633 job1: (groupid=0, jobs=1): err= 0: pid=66446: Fri Nov 15 10:34:09 2024 00:10:44.633 read: IOPS=2973, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1001msec) 00:10:44.633 slat (nsec): min=11126, max=41683, avg=14500.37, stdev=2608.98 00:10:44.633 clat (usec): min=135, max=783, avg=167.08, stdev=16.98 00:10:44.633 lat (usec): min=147, max=795, avg=181.58, stdev=17.29 00:10:44.633 clat percentiles (usec): 00:10:44.633 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:10:44.633 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:10:44.633 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 182], 95.00th=[ 190], 00:10:44.633 | 99.00th=[ 200], 99.50th=[ 208], 99.90th=[ 273], 99.95th=[ 302], 00:10:44.633 | 99.99th=[ 783] 00:10:44.633 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:44.633 slat (usec): min=13, max=155, avg=21.77, stdev= 5.45 00:10:44.633 clat (usec): min=98, max=662, avg=124.43, stdev=18.60 00:10:44.633 lat (usec): min=117, max=689, avg=146.20, stdev=20.00 00:10:44.633 clat percentiles (usec): 00:10:44.633 | 1.00th=[ 104], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 117], 00:10:44.633 | 30.00th=[ 119], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 126], 00:10:44.633 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 137], 95.00th=[ 143], 00:10:44.633 | 99.00th=[ 155], 99.50th=[ 161], 99.90th=[ 326], 99.95th=[ 562], 00:10:44.633 | 99.99th=[ 660] 00:10:44.633 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:44.633 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:44.633 lat (usec) : 100=0.08%, 250=99.75%, 500=0.10%, 750=0.05%, 1000=0.02% 00:10:44.633 cpu : usr=2.40%, sys=8.70%, ctx=6058, majf=0, minf=10 00:10:44.633 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.633 issued rwts: total=2976,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.633 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:10:44.633 job2: (groupid=0, jobs=1): err= 0: pid=66455: Fri Nov 15 10:34:09 2024 00:10:44.633 read: IOPS=1924, BW=7696KiB/s (7881kB/s)(7704KiB/1001msec) 00:10:44.633 slat (nsec): min=8053, max=40587, avg=12873.12, stdev=2583.58 00:10:44.633 clat (usec): min=220, max=731, avg=265.66, stdev=27.08 00:10:44.633 lat (usec): min=234, max=740, avg=278.53, stdev=27.08 00:10:44.633 clat percentiles (usec): 00:10:44.633 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 253], 00:10:44.633 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:10:44.633 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 289], 00:10:44.633 | 99.00th=[ 326], 99.50th=[ 433], 99.90th=[ 709], 99.95th=[ 734], 00:10:44.633 | 99.99th=[ 734] 00:10:44.633 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:44.633 slat (nsec): min=10604, max=85155, avg=18299.25, stdev=4814.02 00:10:44.633 clat (usec): min=141, max=370, avg=205.11, stdev=14.09 00:10:44.633 lat (usec): min=160, max=396, avg=223.41, stdev=14.64 00:10:44.633 clat percentiles (usec): 00:10:44.633 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 194], 00:10:44.633 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 208], 00:10:44.633 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 223], 95.00th=[ 229], 00:10:44.633 | 99.00th=[ 241], 99.50th=[ 247], 99.90th=[ 258], 99.95th=[ 262], 00:10:44.633 | 99.99th=[ 371] 00:10:44.633 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:44.633 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:44.633 lat (usec) : 250=59.08%, 500=40.74%, 750=0.18% 00:10:44.633 cpu : usr=1.20%, sys=5.60%, ctx=3977, majf=0, minf=9 00:10:44.633 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.633 issued rwts: total=1926,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.633 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.633 job3: (groupid=0, jobs=1): err= 0: pid=66456: Fri Nov 15 10:34:09 2024 00:10:44.634 read: IOPS=1925, BW=7700KiB/s (7885kB/s)(7708KiB/1001msec) 00:10:44.634 slat (nsec): min=7876, max=35337, avg=11854.34, stdev=2351.09 00:10:44.634 clat (usec): min=199, max=765, avg=266.57, stdev=26.29 00:10:44.634 lat (usec): min=214, max=782, avg=278.42, stdev=26.45 00:10:44.634 clat percentiles (usec): 00:10:44.634 | 1.00th=[ 237], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 253], 00:10:44.634 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:10:44.634 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 289], 00:10:44.634 | 99.00th=[ 318], 99.50th=[ 412], 99.90th=[ 652], 99.95th=[ 766], 00:10:44.634 | 99.99th=[ 766] 00:10:44.634 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:44.634 slat (nsec): min=8892, max=81972, avg=18451.58, stdev=3708.07 00:10:44.634 clat (usec): min=154, max=363, avg=204.95, stdev=14.04 00:10:44.634 lat (usec): min=173, max=409, avg=223.40, stdev=14.82 00:10:44.634 clat percentiles (usec): 00:10:44.634 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:10:44.634 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:10:44.634 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 223], 95.00th=[ 231], 00:10:44.634 | 99.00th=[ 243], 99.50th=[ 247], 99.90th=[ 262], 99.95th=[ 
265], 00:10:44.634 | 99.99th=[ 363] 00:10:44.634 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:44.634 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:44.634 lat (usec) : 250=57.76%, 500=42.06%, 750=0.15%, 1000=0.03% 00:10:44.634 cpu : usr=1.70%, sys=4.90%, ctx=3977, majf=0, minf=9 00:10:44.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.634 issued rwts: total=1927,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.634 00:10:44.634 Run status group 0 (all jobs): 00:10:44.634 READ: bw=38.5MiB/s (40.3MB/s), 7696KiB/s-11.8MiB/s (7881kB/s-12.4MB/s), io=38.5MiB (40.4MB), run=1001-1001msec 00:10:44.634 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:10:44.634 00:10:44.634 Disk stats (read/write): 00:10:44.634 nvme0n1: ios=2610/2684, merge=0/0, ticks=480/354, in_queue=834, util=88.48% 00:10:44.634 nvme0n2: ios=2609/2662, merge=0/0, ticks=471/344, in_queue=815, util=88.66% 00:10:44.634 nvme0n3: ios=1536/1918, merge=0/0, ticks=399/375, in_queue=774, util=89.15% 00:10:44.634 nvme0n4: ios=1536/1920, merge=0/0, ticks=395/376, in_queue=771, util=89.71% 00:10:44.634 10:34:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:44.634 [global] 00:10:44.634 thread=1 00:10:44.634 invalidate=1 00:10:44.634 rw=randwrite 00:10:44.634 time_based=1 00:10:44.634 runtime=1 00:10:44.634 ioengine=libaio 00:10:44.634 direct=1 00:10:44.634 bs=4096 00:10:44.634 iodepth=1 00:10:44.634 norandommap=0 00:10:44.634 numjobs=1 00:10:44.634 00:10:44.634 verify_dump=1 00:10:44.634 verify_backlog=512 00:10:44.634 verify_state_save=0 00:10:44.634 do_verify=1 00:10:44.634 verify=crc32c-intel 00:10:44.634 [job0] 00:10:44.634 filename=/dev/nvme0n1 00:10:44.634 [job1] 00:10:44.634 filename=/dev/nvme0n2 00:10:44.634 [job2] 00:10:44.634 filename=/dev/nvme0n3 00:10:44.634 [job3] 00:10:44.634 filename=/dev/nvme0n4 00:10:44.634 Could not set queue depth (nvme0n1) 00:10:44.634 Could not set queue depth (nvme0n2) 00:10:44.634 Could not set queue depth (nvme0n3) 00:10:44.634 Could not set queue depth (nvme0n4) 00:10:44.892 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.892 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.892 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.892 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.892 fio-3.35 00:10:44.892 Starting 4 threads 00:10:46.268 00:10:46.268 job0: (groupid=0, jobs=1): err= 0: pid=66511: Fri Nov 15 10:34:11 2024 00:10:46.268 read: IOPS=1158, BW=4635KiB/s (4747kB/s)(4640KiB/1001msec) 00:10:46.268 slat (nsec): min=10254, max=81619, avg=28435.66, stdev=13260.42 00:10:46.268 clat (usec): min=197, max=1089, avg=435.10, stdev=112.69 00:10:46.268 lat (usec): min=217, max=1131, avg=463.54, stdev=121.06 00:10:46.268 clat percentiles (usec): 00:10:46.268 | 1.00th=[ 285], 5.00th=[ 318], 10.00th=[ 326], 20.00th=[ 338], 
00:10:46.268 | 30.00th=[ 351], 40.00th=[ 375], 50.00th=[ 392], 60.00th=[ 437], 00:10:46.268 | 70.00th=[ 482], 80.00th=[ 529], 90.00th=[ 627], 95.00th=[ 652], 00:10:46.268 | 99.00th=[ 701], 99.50th=[ 734], 99.90th=[ 906], 99.95th=[ 1090], 00:10:46.268 | 99.99th=[ 1090] 00:10:46.268 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:46.268 slat (nsec): min=13473, max=81782, avg=32880.83, stdev=11144.56 00:10:46.268 clat (usec): min=101, max=2488, avg=262.88, stdev=113.88 00:10:46.268 lat (usec): min=125, max=2510, avg=295.76, stdev=120.64 00:10:46.268 clat percentiles (usec): 00:10:46.268 | 1.00th=[ 114], 5.00th=[ 125], 10.00th=[ 141], 20.00th=[ 151], 00:10:46.268 | 30.00th=[ 215], 40.00th=[ 247], 50.00th=[ 265], 60.00th=[ 285], 00:10:46.268 | 70.00th=[ 297], 80.00th=[ 318], 90.00th=[ 420], 95.00th=[ 445], 00:10:46.268 | 99.00th=[ 482], 99.50th=[ 506], 99.90th=[ 873], 99.95th=[ 2474], 00:10:46.268 | 99.99th=[ 2474] 00:10:46.268 bw ( KiB/s): min= 5516, max= 5516, per=19.26%, avg=5516.00, stdev= 0.00, samples=1 00:10:46.268 iops : min= 1379, max= 1379, avg=1379.00, stdev= 0.00, samples=1 00:10:46.268 lat (usec) : 250=23.89%, 500=65.43%, 750=10.42%, 1000=0.19% 00:10:46.268 lat (msec) : 2=0.04%, 4=0.04% 00:10:46.268 cpu : usr=1.50%, sys=7.10%, ctx=2696, majf=0, minf=11 00:10:46.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.268 issued rwts: total=1160,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.268 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.268 job1: (groupid=0, jobs=1): err= 0: pid=66512: Fri Nov 15 10:34:11 2024 00:10:46.269 read: IOPS=1513, BW=6054KiB/s (6199kB/s)(6060KiB/1001msec) 00:10:46.269 slat (nsec): min=9595, max=43230, avg=15949.76, stdev=3197.19 00:10:46.269 clat (usec): min=224, max=921, avg=389.97, stdev=110.49 00:10:46.269 lat (usec): min=242, max=937, avg=405.92, stdev=109.67 00:10:46.269 clat percentiles (usec): 00:10:46.269 | 1.00th=[ 260], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 314], 00:10:46.269 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 343], 60.00th=[ 363], 00:10:46.269 | 70.00th=[ 383], 80.00th=[ 469], 90.00th=[ 586], 95.00th=[ 635], 00:10:46.269 | 99.00th=[ 660], 99.50th=[ 685], 99.90th=[ 898], 99.95th=[ 922], 00:10:46.269 | 99.99th=[ 922] 00:10:46.269 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:46.269 slat (nsec): min=19820, max=85302, avg=23932.06, stdev=4760.62 00:10:46.269 clat (usec): min=152, max=591, avg=222.70, stdev=35.38 00:10:46.269 lat (usec): min=173, max=612, avg=246.63, stdev=36.26 00:10:46.269 clat percentiles (usec): 00:10:46.269 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 180], 20.00th=[ 190], 00:10:46.269 | 30.00th=[ 200], 40.00th=[ 210], 50.00th=[ 225], 60.00th=[ 237], 00:10:46.269 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 273], 00:10:46.269 | 99.00th=[ 297], 99.50th=[ 326], 99.90th=[ 519], 99.95th=[ 594], 00:10:46.269 | 99.99th=[ 594] 00:10:46.269 bw ( KiB/s): min= 8175, max= 8175, per=28.54%, avg=8175.00, stdev= 0.00, samples=1 00:10:46.269 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:10:46.269 lat (usec) : 250=40.28%, 500=49.85%, 750=9.67%, 1000=0.20% 00:10:46.269 cpu : usr=1.40%, sys=5.50%, ctx=3051, majf=0, minf=15 00:10:46.269 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:10:46.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.269 issued rwts: total=1515,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.269 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.269 job2: (groupid=0, jobs=1): err= 0: pid=66513: Fri Nov 15 10:34:11 2024 00:10:46.269 read: IOPS=1512, BW=6050KiB/s (6195kB/s)(6056KiB/1001msec) 00:10:46.269 slat (nsec): min=9929, max=50649, avg=16593.58, stdev=7376.40 00:10:46.269 clat (usec): min=231, max=955, avg=389.52, stdev=103.80 00:10:46.269 lat (usec): min=252, max=966, avg=406.11, stdev=109.29 00:10:46.269 clat percentiles (usec): 00:10:46.269 | 1.00th=[ 260], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 318], 00:10:46.269 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 363], 00:10:46.269 | 70.00th=[ 375], 80.00th=[ 457], 90.00th=[ 570], 95.00th=[ 619], 00:10:46.269 | 99.00th=[ 644], 99.50th=[ 668], 99.90th=[ 906], 99.95th=[ 955], 00:10:46.269 | 99.99th=[ 955] 00:10:46.269 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:46.269 slat (usec): min=12, max=116, avg=17.93, stdev= 4.51 00:10:46.269 clat (usec): min=127, max=605, avg=228.90, stdev=36.61 00:10:46.269 lat (usec): min=173, max=621, avg=246.84, stdev=36.78 00:10:46.269 clat percentiles (usec): 00:10:46.269 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 194], 00:10:46.269 | 30.00th=[ 204], 40.00th=[ 217], 50.00th=[ 233], 60.00th=[ 243], 00:10:46.269 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 285], 00:10:46.269 | 99.00th=[ 306], 99.50th=[ 326], 99.90th=[ 537], 99.95th=[ 603], 00:10:46.269 | 99.99th=[ 603] 00:10:46.269 bw ( KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1 00:10:46.269 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:46.269 lat (usec) : 250=36.10%, 500=54.13%, 750=9.61%, 1000=0.16% 00:10:46.269 cpu : usr=1.00%, sys=5.00%, ctx=3052, majf=0, minf=9 00:10:46.269 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.269 issued rwts: total=1514,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.269 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.269 job3: (groupid=0, jobs=1): err= 0: pid=66514: Fri Nov 15 10:34:11 2024 00:10:46.269 read: IOPS=2505, BW=9.79MiB/s (10.3MB/s)(9.80MiB/1001msec) 00:10:46.269 slat (nsec): min=11227, max=41199, avg=14465.80, stdev=3531.21 00:10:46.269 clat (usec): min=150, max=733, avg=211.86, stdev=84.67 00:10:46.269 lat (usec): min=163, max=757, avg=226.33, stdev=86.64 00:10:46.269 clat percentiles (usec): 00:10:46.269 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:10:46.269 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:10:46.269 | 70.00th=[ 186], 80.00th=[ 200], 90.00th=[ 379], 95.00th=[ 412], 00:10:46.269 | 99.00th=[ 433], 99.50th=[ 441], 99.90th=[ 469], 99.95th=[ 478], 00:10:46.269 | 99.99th=[ 734] 00:10:46.269 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:46.269 slat (nsec): min=14398, max=80644, avg=20262.11, stdev=4081.19 00:10:46.269 clat (usec): min=100, max=6410, avg=145.51, stdev=228.77 00:10:46.269 lat (usec): min=120, max=6429, avg=165.77, stdev=229.46 00:10:46.269 clat percentiles (usec): 
00:10:46.269 | 1.00th=[ 106], 5.00th=[ 109], 10.00th=[ 112], 20.00th=[ 117], 00:10:46.269 | 30.00th=[ 121], 40.00th=[ 125], 50.00th=[ 129], 60.00th=[ 135], 00:10:46.269 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 155], 95.00th=[ 169], 00:10:46.269 | 99.00th=[ 260], 99.50th=[ 322], 99.90th=[ 5014], 99.95th=[ 5407], 00:10:46.269 | 99.99th=[ 6390] 00:10:46.269 bw ( KiB/s): min=12263, max=12263, per=42.81%, avg=12263.00, stdev= 0.00, samples=1 00:10:46.269 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:10:46.269 lat (usec) : 250=90.04%, 500=9.81%, 750=0.02% 00:10:46.269 lat (msec) : 4=0.08%, 10=0.06% 00:10:46.269 cpu : usr=1.20%, sys=7.80%, ctx=5068, majf=0, minf=13 00:10:46.269 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.269 issued rwts: total=2508,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.269 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.269 00:10:46.269 Run status group 0 (all jobs): 00:10:46.269 READ: bw=26.1MiB/s (27.4MB/s), 4635KiB/s-9.79MiB/s (4747kB/s-10.3MB/s), io=26.2MiB (27.4MB), run=1001-1001msec 00:10:46.269 WRITE: bw=28.0MiB/s (29.3MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:10:46.269 00:10:46.269 Disk stats (read/write): 00:10:46.269 nvme0n1: ios=1074/1144, merge=0/0, ticks=488/338, in_queue=826, util=88.48% 00:10:46.269 nvme0n2: ios=1290/1536, merge=0/0, ticks=484/348, in_queue=832, util=89.36% 00:10:46.269 nvme0n3: ios=1248/1536, merge=0/0, ticks=451/306, in_queue=757, util=89.38% 00:10:46.269 nvme0n4: ios=2176/2560, merge=0/0, ticks=412/383, in_queue=795, util=89.02% 00:10:46.269 10:34:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:46.269 [global] 00:10:46.269 thread=1 00:10:46.269 invalidate=1 00:10:46.269 rw=write 00:10:46.269 time_based=1 00:10:46.269 runtime=1 00:10:46.269 ioengine=libaio 00:10:46.269 direct=1 00:10:46.269 bs=4096 00:10:46.269 iodepth=128 00:10:46.269 norandommap=0 00:10:46.269 numjobs=1 00:10:46.269 00:10:46.269 verify_dump=1 00:10:46.269 verify_backlog=512 00:10:46.269 verify_state_save=0 00:10:46.269 do_verify=1 00:10:46.269 verify=crc32c-intel 00:10:46.269 [job0] 00:10:46.269 filename=/dev/nvme0n1 00:10:46.269 [job1] 00:10:46.269 filename=/dev/nvme0n2 00:10:46.269 [job2] 00:10:46.269 filename=/dev/nvme0n3 00:10:46.269 [job3] 00:10:46.269 filename=/dev/nvme0n4 00:10:46.269 Could not set queue depth (nvme0n1) 00:10:46.269 Could not set queue depth (nvme0n2) 00:10:46.269 Could not set queue depth (nvme0n3) 00:10:46.269 Could not set queue depth (nvme0n4) 00:10:46.269 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.269 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.269 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.269 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.269 fio-3.35 00:10:46.269 Starting 4 threads 00:10:47.645 00:10:47.645 job0: (groupid=0, jobs=1): err= 0: pid=66569: Fri Nov 15 10:34:12 2024 00:10:47.645 read: IOPS=5404, BW=21.1MiB/s (22.1MB/s)(21.2MiB/1002msec) 
00:10:47.645 slat (usec): min=5, max=3214, avg=86.97, stdev=334.82 00:10:47.645 clat (usec): min=490, max=15044, avg=11479.51, stdev=1236.23 00:10:47.645 lat (usec): min=2014, max=15064, avg=11566.47, stdev=1264.40 00:10:47.645 clat percentiles (usec): 00:10:47.645 | 1.00th=[ 5866], 5.00th=[ 9765], 10.00th=[10683], 20.00th=[11076], 00:10:47.646 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11469], 60.00th=[11600], 00:10:47.646 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12649], 95.00th=[13304], 00:10:47.646 | 99.00th=[13960], 99.50th=[14222], 99.90th=[14746], 99.95th=[14877], 00:10:47.646 | 99.99th=[15008] 00:10:47.646 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:10:47.646 slat (usec): min=10, max=3267, avg=86.05, stdev=373.99 00:10:47.646 clat (usec): min=8357, max=15430, avg=11451.38, stdev=913.92 00:10:47.646 lat (usec): min=8393, max=15477, avg=11537.42, stdev=976.92 00:10:47.646 clat percentiles (usec): 00:10:47.646 | 1.00th=[ 9110], 5.00th=[10290], 10.00th=[10552], 20.00th=[10814], 00:10:47.646 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:10:47.646 | 70.00th=[11731], 80.00th=[11863], 90.00th=[12256], 95.00th=[13435], 00:10:47.646 | 99.00th=[14484], 99.50th=[14746], 99.90th=[15270], 99.95th=[15270], 00:10:47.646 | 99.99th=[15401] 00:10:47.646 bw ( KiB/s): min=22064, max=23038, per=34.58%, avg=22551.00, stdev=688.72, samples=2 00:10:47.646 iops : min= 5516, max= 5759, avg=5637.50, stdev=171.83, samples=2 00:10:47.646 lat (usec) : 500=0.01% 00:10:47.646 lat (msec) : 4=0.23%, 10=4.35%, 20=95.41% 00:10:47.646 cpu : usr=5.99%, sys=14.79%, ctx=541, majf=0, minf=9 00:10:47.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:47.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.646 issued rwts: total=5415,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.646 job1: (groupid=0, jobs=1): err= 0: pid=66570: Fri Nov 15 10:34:12 2024 00:10:47.646 read: IOPS=5361, BW=20.9MiB/s (22.0MB/s)(21.0MiB/1002msec) 00:10:47.646 slat (usec): min=4, max=3482, avg=88.04, stdev=338.08 00:10:47.646 clat (usec): min=537, max=15078, avg=11295.75, stdev=1277.26 00:10:47.646 lat (usec): min=1996, max=15162, avg=11383.79, stdev=1304.45 00:10:47.646 clat percentiles (usec): 00:10:47.646 | 1.00th=[ 5866], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10945], 00:10:47.646 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:10:47.646 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12649], 95.00th=[13304], 00:10:47.646 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14877], 99.95th=[15008], 00:10:47.646 | 99.99th=[15139] 00:10:47.646 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:10:47.646 slat (usec): min=11, max=3236, avg=86.01, stdev=329.93 00:10:47.646 clat (usec): min=8251, max=15386, avg=11692.97, stdev=952.15 00:10:47.646 lat (usec): min=8271, max=15434, avg=11778.99, stdev=991.93 00:10:47.646 clat percentiles (usec): 00:10:47.646 | 1.00th=[ 8979], 5.00th=[10290], 10.00th=[10683], 20.00th=[11076], 00:10:47.646 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:10:47.646 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12780], 95.00th=[13698], 00:10:47.646 | 99.00th=[14615], 99.50th=[14877], 99.90th=[15139], 99.95th=[15139], 00:10:47.646 | 99.99th=[15401] 00:10:47.646 bw ( KiB/s): 
min=22256, max=22845, per=34.58%, avg=22550.50, stdev=416.49, samples=2 00:10:47.646 iops : min= 5564, max= 5711, avg=5637.50, stdev=103.94, samples=2 00:10:47.646 lat (usec) : 750=0.01% 00:10:47.646 lat (msec) : 4=0.19%, 10=6.52%, 20=93.28% 00:10:47.646 cpu : usr=4.90%, sys=15.08%, ctx=644, majf=0, minf=9 00:10:47.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:47.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.646 issued rwts: total=5372,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.646 job2: (groupid=0, jobs=1): err= 0: pid=66571: Fri Nov 15 10:34:12 2024 00:10:47.646 read: IOPS=2484, BW=9938KiB/s (10.2MB/s)(9988KiB/1005msec) 00:10:47.646 slat (usec): min=7, max=9270, avg=205.31, stdev=1058.26 00:10:47.646 clat (usec): min=1939, max=32217, avg=25204.70, stdev=3334.14 00:10:47.646 lat (usec): min=7882, max=32230, avg=25410.02, stdev=3195.38 00:10:47.646 clat percentiles (usec): 00:10:47.646 | 1.00th=[ 8291], 5.00th=[19530], 10.00th=[21365], 20.00th=[24773], 00:10:47.646 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:10:47.646 | 70.00th=[26346], 80.00th=[26608], 90.00th=[26870], 95.00th=[28705], 00:10:47.646 | 99.00th=[32113], 99.50th=[32113], 99.90th=[32113], 99.95th=[32113], 00:10:47.646 | 99.99th=[32113] 00:10:47.646 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:10:47.646 slat (usec): min=14, max=7297, avg=182.74, stdev=894.72 00:10:47.646 clat (usec): min=13759, max=31965, avg=24697.81, stdev=2788.05 00:10:47.646 lat (usec): min=18533, max=31985, avg=24880.55, stdev=2631.31 00:10:47.646 clat percentiles (usec): 00:10:47.646 | 1.00th=[18482], 5.00th=[19268], 10.00th=[20317], 20.00th=[22414], 00:10:47.646 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:10:47.646 | 70.00th=[25560], 80.00th=[25822], 90.00th=[27132], 95.00th=[30540], 00:10:47.646 | 99.00th=[31851], 99.50th=[31851], 99.90th=[31851], 99.95th=[31851], 00:10:47.646 | 99.99th=[31851] 00:10:47.646 bw ( KiB/s): min= 9208, max=11294, per=15.72%, avg=10251.00, stdev=1475.02, samples=2 00:10:47.646 iops : min= 2302, max= 2823, avg=2562.50, stdev=368.40, samples=2 00:10:47.646 lat (msec) : 2=0.02%, 10=0.63%, 20=7.20%, 50=92.15% 00:10:47.646 cpu : usr=2.49%, sys=7.87%, ctx=160, majf=0, minf=5 00:10:47.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:47.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.646 issued rwts: total=2497,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.646 job3: (groupid=0, jobs=1): err= 0: pid=66572: Fri Nov 15 10:34:12 2024 00:10:47.646 read: IOPS=2420, BW=9684KiB/s (9916kB/s)(9732KiB/1005msec) 00:10:47.646 slat (usec): min=4, max=9396, avg=201.46, stdev=1043.69 00:10:47.646 clat (usec): min=382, max=33217, avg=26243.52, stdev=3861.40 00:10:47.646 lat (usec): min=6200, max=33254, avg=26444.98, stdev=3719.73 00:10:47.646 clat percentiles (usec): 00:10:47.646 | 1.00th=[ 6652], 5.00th=[20317], 10.00th=[23462], 20.00th=[25297], 00:10:47.646 | 30.00th=[25560], 40.00th=[26084], 50.00th=[26346], 60.00th=[26346], 00:10:47.646 | 70.00th=[26870], 80.00th=[27395], 90.00th=[31589], 
95.00th=[32637], 00:10:47.646 | 99.00th=[32900], 99.50th=[33162], 99.90th=[33162], 99.95th=[33162], 00:10:47.646 | 99.99th=[33162] 00:10:47.646 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:10:47.646 slat (usec): min=11, max=12257, avg=192.41, stdev=975.13 00:10:47.646 clat (usec): min=12752, max=31837, avg=24511.59, stdev=2847.69 00:10:47.646 lat (usec): min=16438, max=31851, avg=24704.00, stdev=2710.26 00:10:47.646 clat percentiles (usec): 00:10:47.646 | 1.00th=[17433], 5.00th=[19006], 10.00th=[20579], 20.00th=[22152], 00:10:47.646 | 30.00th=[23987], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:10:47.646 | 70.00th=[25560], 80.00th=[25822], 90.00th=[28181], 95.00th=[28705], 00:10:47.646 | 99.00th=[31851], 99.50th=[31851], 99.90th=[31851], 99.95th=[31851], 00:10:47.646 | 99.99th=[31851] 00:10:47.646 bw ( KiB/s): min=10232, max=10248, per=15.70%, avg=10240.00, stdev=11.31, samples=2 00:10:47.646 iops : min= 2558, max= 2562, avg=2560.00, stdev= 2.83, samples=2 00:10:47.646 lat (usec) : 500=0.02% 00:10:47.646 lat (msec) : 10=0.64%, 20=6.35%, 50=92.99% 00:10:47.646 cpu : usr=2.19%, sys=7.47%, ctx=158, majf=0, minf=10 00:10:47.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:10:47.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.646 issued rwts: total=2433,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.646 00:10:47.646 Run status group 0 (all jobs): 00:10:47.646 READ: bw=61.1MiB/s (64.1MB/s), 9684KiB/s-21.1MiB/s (9916kB/s-22.1MB/s), io=61.4MiB (64.4MB), run=1002-1005msec 00:10:47.646 WRITE: bw=63.7MiB/s (66.8MB/s), 9.95MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=64.0MiB (67.1MB), run=1002-1005msec 00:10:47.646 00:10:47.646 Disk stats (read/write): 00:10:47.646 nvme0n1: ios=4658/4824, merge=0/0, ticks=17033/15664, in_queue=32697, util=87.78% 00:10:47.646 nvme0n2: ios=4642/4820, merge=0/0, ticks=16792/16278, in_queue=33070, util=88.92% 00:10:47.646 nvme0n3: ios=2048/2304, merge=0/0, ticks=12806/12640, in_queue=25446, util=89.12% 00:10:47.646 nvme0n4: ios=2048/2176, merge=0/0, ticks=13173/12198, in_queue=25371, util=89.26% 00:10:47.646 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:47.646 [global] 00:10:47.646 thread=1 00:10:47.646 invalidate=1 00:10:47.646 rw=randwrite 00:10:47.646 time_based=1 00:10:47.646 runtime=1 00:10:47.646 ioengine=libaio 00:10:47.646 direct=1 00:10:47.646 bs=4096 00:10:47.646 iodepth=128 00:10:47.646 norandommap=0 00:10:47.646 numjobs=1 00:10:47.646 00:10:47.646 verify_dump=1 00:10:47.646 verify_backlog=512 00:10:47.646 verify_state_save=0 00:10:47.646 do_verify=1 00:10:47.646 verify=crc32c-intel 00:10:47.646 [job0] 00:10:47.646 filename=/dev/nvme0n1 00:10:47.646 [job1] 00:10:47.646 filename=/dev/nvme0n2 00:10:47.646 [job2] 00:10:47.646 filename=/dev/nvme0n3 00:10:47.646 [job3] 00:10:47.646 filename=/dev/nvme0n4 00:10:47.646 Could not set queue depth (nvme0n1) 00:10:47.646 Could not set queue depth (nvme0n2) 00:10:47.646 Could not set queue depth (nvme0n3) 00:10:47.646 Could not set queue depth (nvme0n4) 00:10:47.646 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.646 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.646 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.646 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.646 fio-3.35 00:10:47.646 Starting 4 threads 00:10:48.649 00:10:48.649 job0: (groupid=0, jobs=1): err= 0: pid=66630: Fri Nov 15 10:34:14 2024 00:10:48.649 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:10:48.649 slat (usec): min=5, max=10095, avg=198.44, stdev=779.94 00:10:48.649 clat (usec): min=14461, max=35502, avg=25350.71, stdev=3295.91 00:10:48.649 lat (usec): min=15654, max=35800, avg=25549.15, stdev=3293.46 00:10:48.649 clat percentiles (usec): 00:10:48.649 | 1.00th=[16712], 5.00th=[19006], 10.00th=[20841], 20.00th=[23462], 00:10:48.649 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25822], 00:10:48.649 | 70.00th=[26608], 80.00th=[27919], 90.00th=[30016], 95.00th=[30802], 00:10:48.649 | 99.00th=[32900], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:10:48.649 | 99.99th=[35390] 00:10:48.649 write: IOPS=2819, BW=11.0MiB/s (11.5MB/s)(11.1MiB/1008msec); 0 zone resets 00:10:48.649 slat (usec): min=10, max=7345, avg=166.06, stdev=722.34 00:10:48.649 clat (usec): min=5539, max=39243, avg=21847.80, stdev=4566.57 00:10:48.649 lat (usec): min=8109, max=39266, avg=22013.86, stdev=4576.15 00:10:48.649 clat percentiles (usec): 00:10:48.649 | 1.00th=[ 8848], 5.00th=[16450], 10.00th=[16712], 20.00th=[17695], 00:10:48.649 | 30.00th=[18744], 40.00th=[20317], 50.00th=[22152], 60.00th=[23200], 00:10:48.649 | 70.00th=[24511], 80.00th=[25560], 90.00th=[26870], 95.00th=[29492], 00:10:48.649 | 99.00th=[33817], 99.50th=[34866], 99.90th=[39060], 99.95th=[39060], 00:10:48.649 | 99.99th=[39060] 00:10:48.649 bw ( KiB/s): min= 9424, max=12288, per=16.75%, avg=10856.00, stdev=2025.15, samples=2 00:10:48.649 iops : min= 2356, max= 3072, avg=2714.00, stdev=506.29, samples=2 00:10:48.649 lat (msec) : 10=0.80%, 20=22.81%, 50=76.40% 00:10:48.649 cpu : usr=2.98%, sys=7.45%, ctx=701, majf=0, minf=9 00:10:48.649 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:48.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.649 issued rwts: total=2560,2842,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.649 job1: (groupid=0, jobs=1): err= 0: pid=66632: Fri Nov 15 10:34:14 2024 00:10:48.649 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:10:48.649 slat (usec): min=7, max=4762, avg=85.77, stdev=338.59 00:10:48.649 clat (usec): min=8399, max=15540, avg=11443.92, stdev=788.16 00:10:48.649 lat (usec): min=8413, max=15563, avg=11529.70, stdev=831.45 00:10:48.649 clat percentiles (usec): 00:10:48.649 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10683], 20.00th=[11076], 00:10:48.649 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11469], 60.00th=[11469], 00:10:48.649 | 70.00th=[11600], 80.00th=[11731], 90.00th=[12649], 95.00th=[13042], 00:10:48.649 | 99.00th=[13698], 99.50th=[14222], 99.90th=[14746], 99.95th=[14746], 00:10:48.649 | 99.99th=[15533] 00:10:48.649 write: IOPS=5719, BW=22.3MiB/s (23.4MB/s)(22.4MiB/1003msec); 0 zone resets 00:10:48.649 slat (usec): min=10, max=3140, avg=82.03, stdev=368.80 00:10:48.649 clat (usec): min=1995, max=15318, avg=10855.51, 
stdev=1153.93 00:10:48.649 lat (usec): min=2019, max=15350, avg=10937.54, stdev=1206.71 00:10:48.649 clat percentiles (usec): 00:10:48.649 | 1.00th=[ 5866], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10421], 00:10:48.649 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:10:48.649 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11731], 95.00th=[12780], 00:10:48.649 | 99.00th=[13960], 99.50th=[14222], 99.90th=[14615], 99.95th=[15139], 00:10:48.649 | 99.99th=[15270] 00:10:48.649 bw ( KiB/s): min=20912, max=24144, per=34.76%, avg=22528.00, stdev=2285.37, samples=2 00:10:48.649 iops : min= 5228, max= 6036, avg=5632.00, stdev=571.34, samples=2 00:10:48.649 lat (msec) : 2=0.01%, 4=0.33%, 10=6.03%, 20=93.62% 00:10:48.649 cpu : usr=4.69%, sys=16.07%, ctx=467, majf=0, minf=9 00:10:48.649 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:48.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.649 issued rwts: total=5632,5737,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.649 job2: (groupid=0, jobs=1): err= 0: pid=66633: Fri Nov 15 10:34:14 2024 00:10:48.649 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:10:48.649 slat (usec): min=7, max=8291, avg=197.28, stdev=774.71 00:10:48.649 clat (usec): min=17223, max=35687, avg=25297.98, stdev=2660.60 00:10:48.649 lat (usec): min=18174, max=35720, avg=25495.26, stdev=2657.47 00:10:48.649 clat percentiles (usec): 00:10:48.649 | 1.00th=[18482], 5.00th=[20841], 10.00th=[21890], 20.00th=[23725], 00:10:48.649 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24773], 60.00th=[25297], 00:10:48.649 | 70.00th=[26346], 80.00th=[27395], 90.00th=[28181], 95.00th=[29492], 00:10:48.649 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:10:48.649 | 99.99th=[35914] 00:10:48.649 write: IOPS=2619, BW=10.2MiB/s (10.7MB/s)(10.3MiB/1006msec); 0 zone resets 00:10:48.649 slat (usec): min=7, max=7323, avg=179.69, stdev=756.57 00:10:48.649 clat (usec): min=5551, max=34714, avg=23377.14, stdev=4587.88 00:10:48.649 lat (usec): min=5877, max=34740, avg=23556.84, stdev=4586.96 00:10:48.649 clat percentiles (usec): 00:10:48.649 | 1.00th=[ 7963], 5.00th=[17433], 10.00th=[19006], 20.00th=[20579], 00:10:48.649 | 30.00th=[21103], 40.00th=[22414], 50.00th=[23462], 60.00th=[23987], 00:10:48.649 | 70.00th=[24773], 80.00th=[26346], 90.00th=[28967], 95.00th=[32637], 00:10:48.649 | 99.00th=[34341], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:10:48.649 | 99.99th=[34866] 00:10:48.649 bw ( KiB/s): min= 8920, max=11560, per=15.80%, avg=10240.00, stdev=1866.76, samples=2 00:10:48.649 iops : min= 2230, max= 2890, avg=2560.00, stdev=466.69, samples=2 00:10:48.649 lat (msec) : 10=1.27%, 20=7.99%, 50=90.74% 00:10:48.649 cpu : usr=2.39%, sys=7.76%, ctx=690, majf=0, minf=15 00:10:48.649 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:48.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.649 issued rwts: total=2560,2635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.649 job3: (groupid=0, jobs=1): err= 0: pid=66634: Fri Nov 15 10:34:14 2024 00:10:48.649 read: IOPS=5014, BW=19.6MiB/s (20.5MB/s)(19.6MiB/1002msec) 
00:10:48.649 slat (usec): min=4, max=3631, avg=96.09, stdev=451.46 00:10:48.649 clat (usec): min=338, max=14217, avg=12653.39, stdev=1114.59 00:10:48.649 lat (usec): min=2896, max=14243, avg=12749.49, stdev=1021.65 00:10:48.649 clat percentiles (usec): 00:10:48.649 | 1.00th=[ 6521], 5.00th=[12125], 10.00th=[12387], 20.00th=[12518], 00:10:48.649 | 30.00th=[12649], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:10:48.649 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13435], 95.00th=[13566], 00:10:48.649 | 99.00th=[13960], 99.50th=[14091], 99.90th=[14222], 99.95th=[14222], 00:10:48.649 | 99.99th=[14222] 00:10:48.649 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:10:48.649 slat (usec): min=10, max=2907, avg=92.86, stdev=394.66 00:10:48.649 clat (usec): min=9247, max=13272, avg=12289.51, stdev=527.85 00:10:48.649 lat (usec): min=10566, max=13417, avg=12382.37, stdev=349.42 00:10:48.649 clat percentiles (usec): 00:10:48.649 | 1.00th=[ 9896], 5.00th=[11600], 10.00th=[11863], 20.00th=[11994], 00:10:48.649 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:10:48.649 | 70.00th=[12518], 80.00th=[12649], 90.00th=[12780], 95.00th=[12911], 00:10:48.649 | 99.00th=[13042], 99.50th=[13042], 99.90th=[13304], 99.95th=[13304], 00:10:48.649 | 99.99th=[13304] 00:10:48.649 bw ( KiB/s): min=20480, max=20480, per=31.60%, avg=20480.00, stdev= 0.00, samples=2 00:10:48.649 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:48.649 lat (usec) : 500=0.01% 00:10:48.649 lat (msec) : 4=0.32%, 10=1.55%, 20=98.13% 00:10:48.649 cpu : usr=5.79%, sys=13.29%, ctx=320, majf=0, minf=10 00:10:48.649 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:48.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.649 issued rwts: total=5025,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.649 00:10:48.649 Run status group 0 (all jobs): 00:10:48.649 READ: bw=61.1MiB/s (64.1MB/s), 9.92MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=61.6MiB (64.6MB), run=1002-1008msec 00:10:48.649 WRITE: bw=63.3MiB/s (66.4MB/s), 10.2MiB/s-22.3MiB/s (10.7MB/s-23.4MB/s), io=63.8MiB (66.9MB), run=1002-1008msec 00:10:48.649 00:10:48.649 Disk stats (read/write): 00:10:48.649 nvme0n1: ios=2098/2560, merge=0/0, ticks=16998/16628, in_queue=33626, util=87.68% 00:10:48.649 nvme0n2: ios=4688/5120, merge=0/0, ticks=16787/15391, in_queue=32178, util=89.38% 00:10:48.649 nvme0n3: ios=2053/2368, merge=0/0, ticks=16285/17126, in_queue=33411, util=88.67% 00:10:48.649 nvme0n4: ios=4128/4608, merge=0/0, ticks=11839/12068, in_queue=23907, util=89.71% 00:10:48.649 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:48.649 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66647 00:10:48.649 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:48.649 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:48.908 [global] 00:10:48.908 thread=1 00:10:48.908 invalidate=1 00:10:48.908 rw=read 00:10:48.908 time_based=1 00:10:48.908 runtime=10 00:10:48.908 ioengine=libaio 00:10:48.908 direct=1 00:10:48.908 bs=4096 00:10:48.908 iodepth=1 00:10:48.908 norandommap=1 00:10:48.908 numjobs=1 
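Unlike the four one-second verify passes above, this final [global] section describes a plain 10-second sequential read (rw=read, time_based, runtime=10, norandommap=1, no verify options); the [job] sections listing the four devices follow below. The wrapper backgrounds the job (its PID is captured as fio_pid=66647 at fio.sh@59) and sleeps 3 seconds so the script can start deleting bdevs while I/O is still in flight. Run by hand, a roughly equivalent single-device invocation would be:

    # Hedged equivalent of the wrapper-generated job; flags mirror the [global] section above
    fio --name=job0 --filename=/dev/nvme0n1 --rw=read --bs=4096 --iodepth=1 \
        --numjobs=1 --thread=1 --invalidate=1 --norandommap=1 \
        --time_based=1 --runtime=10 --ioengine=libaio --direct=1

One observation worth carrying over from the depth-128 passes above: mean completion latency there sits around 11.5 ms, versus on the order of 0.1-0.2 ms in the iodepth=1 runs, and Little's law (in-flight ~= IOPS x latency) applied to, say, job1 of the randwrite pass gives about 5615 x 0.0114 s, i.e. roughly 64 requests actually outstanding on average, about half the configured depth of 128.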
00:10:48.908 00:10:48.908 [job0] 00:10:48.908 filename=/dev/nvme0n1 00:10:48.908 [job1] 00:10:48.908 filename=/dev/nvme0n2 00:10:48.908 [job2] 00:10:48.908 filename=/dev/nvme0n3 00:10:48.908 [job3] 00:10:48.908 filename=/dev/nvme0n4 00:10:48.908 Could not set queue depth (nvme0n1) 00:10:48.908 Could not set queue depth (nvme0n2) 00:10:48.908 Could not set queue depth (nvme0n3) 00:10:48.908 Could not set queue depth (nvme0n4) 00:10:48.908 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.908 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.908 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.908 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.908 fio-3.35 00:10:48.908 Starting 4 threads 00:10:52.207 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:52.207 fio: pid=66694, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:52.207 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=34267136, buflen=4096 00:10:52.207 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:52.466 fio: pid=66693, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:52.466 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=70414336, buflen=4096 00:10:52.466 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.466 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:52.724 fio: pid=66691, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:52.724 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=47788032, buflen=4096 00:10:52.724 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.724 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:52.982 fio: pid=66692, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:52.982 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=52326400, buflen=4096 00:10:52.982 00:10:52.982 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66691: Fri Nov 15 10:34:18 2024 00:10:52.982 read: IOPS=3252, BW=12.7MiB/s (13.3MB/s)(45.6MiB/3587msec) 00:10:52.982 slat (usec): min=10, max=9113, avg=21.83, stdev=140.61 00:10:52.982 clat (usec): min=82, max=3877, avg=283.60, stdev=100.52 00:10:52.982 lat (usec): min=145, max=9282, avg=305.43, stdev=172.51 00:10:52.982 clat percentiles (usec): 00:10:52.982 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 161], 00:10:52.982 | 30.00th=[ 306], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 326], 00:10:52.982 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 347], 95.00th=[ 355], 00:10:52.982 | 99.00th=[ 388], 99.50th=[ 445], 99.90th=[ 775], 99.95th=[ 1139], 
00:10:52.982 | 99.99th=[ 3130] 00:10:52.982 bw ( KiB/s): min=11224, max=11472, per=21.98%, avg=11348.00, stdev=80.76, samples=6 00:10:52.982 iops : min= 2806, max= 2868, avg=2837.00, stdev=20.19, samples=6 00:10:52.982 lat (usec) : 100=0.01%, 250=28.12%, 500=71.49%, 750=0.27%, 1000=0.04% 00:10:52.982 lat (msec) : 2=0.03%, 4=0.03% 00:10:52.982 cpu : usr=1.28%, sys=5.44%, ctx=11676, majf=0, minf=1 00:10:52.982 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.982 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.982 issued rwts: total=11668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.982 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.982 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66692: Fri Nov 15 10:34:18 2024 00:10:52.982 read: IOPS=3297, BW=12.9MiB/s (13.5MB/s)(49.9MiB/3874msec) 00:10:52.982 slat (usec): min=7, max=9705, avg=18.47, stdev=173.35 00:10:52.982 clat (usec): min=4, max=14309, avg=283.26, stdev=152.98 00:10:52.982 lat (usec): min=144, max=14323, avg=301.73, stdev=233.25 00:10:52.982 clat percentiles (usec): 00:10:52.982 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 159], 00:10:52.982 | 30.00th=[ 239], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 330], 00:10:52.982 | 70.00th=[ 338], 80.00th=[ 343], 90.00th=[ 355], 95.00th=[ 363], 00:10:52.982 | 99.00th=[ 429], 99.50th=[ 474], 99.90th=[ 619], 99.95th=[ 1090], 00:10:52.982 | 99.99th=[ 2147] 00:10:52.982 bw ( KiB/s): min=10368, max=18834, per=23.80%, avg=12289.43, stdev=2910.28, samples=7 00:10:52.982 iops : min= 2592, max= 4708, avg=3072.29, stdev=727.38, samples=7 00:10:52.982 lat (usec) : 10=0.01%, 250=32.41%, 500=67.24%, 750=0.28% 00:10:52.982 lat (msec) : 2=0.04%, 4=0.01%, 20=0.01% 00:10:52.982 cpu : usr=1.03%, sys=4.57%, ctx=12785, majf=0, minf=2 00:10:52.982 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.982 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.982 issued rwts: total=12776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.982 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.982 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66693: Fri Nov 15 10:34:18 2024 00:10:52.982 read: IOPS=5254, BW=20.5MiB/s (21.5MB/s)(67.2MiB/3272msec) 00:10:52.982 slat (usec): min=8, max=12772, avg=14.77, stdev=113.72 00:10:52.982 clat (usec): min=125, max=2111, avg=174.45, stdev=34.47 00:10:52.982 lat (usec): min=151, max=13067, avg=189.22, stdev=119.98 00:10:52.982 clat percentiles (usec): 00:10:52.982 | 1.00th=[ 151], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 159], 00:10:52.982 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:10:52.982 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 194], 95.00th=[ 233], 00:10:52.982 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 330], 99.95th=[ 586], 00:10:52.982 | 99.99th=[ 1795] 00:10:52.982 bw ( KiB/s): min=20560, max=22280, per=41.97%, avg=21665.33, stdev=639.63, samples=6 00:10:52.982 iops : min= 5140, max= 5570, avg=5416.33, stdev=159.91, samples=6 00:10:52.982 lat (usec) : 250=97.77%, 500=2.18%, 750=0.01%, 1000=0.01% 00:10:52.982 lat (msec) : 2=0.02%, 4=0.01% 00:10:52.982 cpu : usr=1.31%, sys=6.27%, ctx=17198, majf=0, 
minf=1 00:10:52.982 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.982 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.982 issued rwts: total=17192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.982 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.982 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66694: Fri Nov 15 10:34:18 2024 00:10:52.982 read: IOPS=2816, BW=11.0MiB/s (11.5MB/s)(32.7MiB/2971msec) 00:10:52.982 slat (nsec): min=8036, max=79435, avg=18405.21, stdev=5081.67 00:10:52.982 clat (usec): min=188, max=1573, avg=334.87, stdev=28.06 00:10:52.982 lat (usec): min=210, max=1605, avg=353.27, stdev=28.75 00:10:52.982 clat percentiles (usec): 00:10:52.982 | 1.00th=[ 297], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 322], 00:10:52.982 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 334], 00:10:52.982 | 70.00th=[ 343], 80.00th=[ 347], 90.00th=[ 355], 95.00th=[ 367], 00:10:52.982 | 99.00th=[ 441], 99.50th=[ 469], 99.90th=[ 578], 99.95th=[ 619], 00:10:52.982 | 99.99th=[ 1582] 00:10:52.982 bw ( KiB/s): min=11272, max=11432, per=22.01%, avg=11364.80, stdev=66.60, samples=5 00:10:52.982 iops : min= 2818, max= 2858, avg=2841.20, stdev=16.65, samples=5 00:10:52.982 lat (usec) : 250=0.22%, 500=99.45%, 750=0.31% 00:10:52.982 lat (msec) : 2=0.01% 00:10:52.982 cpu : usr=1.14%, sys=5.19%, ctx=8368, majf=0, minf=2 00:10:52.982 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.982 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.982 issued rwts: total=8367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.982 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.982 00:10:52.982 Run status group 0 (all jobs): 00:10:52.982 READ: bw=50.4MiB/s (52.9MB/s), 11.0MiB/s-20.5MiB/s (11.5MB/s-21.5MB/s), io=195MiB (205MB), run=2971-3874msec 00:10:52.982 00:10:52.982 Disk stats (read/write): 00:10:52.982 nvme0n1: ios=10413/0, merge=0/0, ticks=3158/0, in_queue=3158, util=95.74% 00:10:52.982 nvme0n2: ios=11323/0, merge=0/0, ticks=3227/0, in_queue=3227, util=95.77% 00:10:52.982 nvme0n3: ios=16626/0, merge=0/0, ticks=2884/0, in_queue=2884, util=96.34% 00:10:52.982 nvme0n4: ios=8115/0, merge=0/0, ticks=2642/0, in_queue=2642, util=96.80% 00:10:52.982 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.983 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:53.335 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:53.335 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:53.608 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:53.608 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:53.867 10:34:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:53.867 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:54.125 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:54.125 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:54.384 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:54.384 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66647 00:10:54.384 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:54.384 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:54.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.384 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:54.384 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:10:54.384 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:54.384 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.384 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.384 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:54.642 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:10:54.642 nvmf hotplug test: fio failed as expected 00:10:54.642 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:54.642 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:54.642 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:54.900 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@125 -- # for i in {1..20} 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:54.901 rmmod nvme_tcp 00:10:54.901 rmmod nvme_fabrics 00:10:54.901 rmmod nvme_keyring 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66262 ']' 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66262 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 66262 ']' 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 66262 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66262 00:10:54.901 killing process with pid 66262 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66262' 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 66262 00:10:54.901 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 66262 00:10:55.159 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:55.159 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:55.159 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:55.159 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:55.159 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:55.159 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:55.159 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:55.159 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:55.159 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:55.159 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:55.160 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:55.160 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:55.160 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 
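The iptr step traced above restores the firewall by removing only the rules SPDK tagged at setup time: each rule is installed with -m comment --comment 'SPDK_NVMF:...' (visible in the nvmf_veth_init trace further down in this log), so teardown can filter a full dump instead of tracking rule numbers. Condensed, the idiom is (a sketch reconstructed from the trace):

# Drop only the SPDK_NVMF-tagged rules; everything else survives the round trip.
iptables-save | grep -v SPDK_NVMF | iptables-restore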
00:10:55.160 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:55.160 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:55.160 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:55.160 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:55.160 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:55.160 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:55.160 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:55.160 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:55.160 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:10:55.419 ************************************ 00:10:55.419 END TEST nvmf_fio_target 00:10:55.419 ************************************ 00:10:55.419 00:10:55.419 real 0m20.275s 00:10:55.419 user 1m16.953s 00:10:55.419 sys 0m10.041s 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:55.419 ************************************ 00:10:55.419 START TEST nvmf_bdevio 00:10:55.419 ************************************ 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:55.419 * Looking for test storage... 
00:10:55.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:55.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.419 --rc genhtml_branch_coverage=1 00:10:55.419 --rc genhtml_function_coverage=1 00:10:55.419 --rc genhtml_legend=1 00:10:55.419 --rc geninfo_all_blocks=1 00:10:55.419 --rc geninfo_unexecuted_blocks=1 00:10:55.419 00:10:55.419 ' 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:55.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.419 --rc genhtml_branch_coverage=1 00:10:55.419 --rc genhtml_function_coverage=1 00:10:55.419 --rc genhtml_legend=1 00:10:55.419 --rc geninfo_all_blocks=1 00:10:55.419 --rc geninfo_unexecuted_blocks=1 00:10:55.419 00:10:55.419 ' 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:55.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.419 --rc genhtml_branch_coverage=1 00:10:55.419 --rc genhtml_function_coverage=1 00:10:55.419 --rc genhtml_legend=1 00:10:55.419 --rc geninfo_all_blocks=1 00:10:55.419 --rc geninfo_unexecuted_blocks=1 00:10:55.419 00:10:55.419 ' 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:55.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.419 --rc genhtml_branch_coverage=1 00:10:55.419 --rc genhtml_function_coverage=1 00:10:55.419 --rc genhtml_legend=1 00:10:55.419 --rc geninfo_all_blocks=1 00:10:55.419 --rc geninfo_unexecuted_blocks=1 00:10:55.419 00:10:55.419 ' 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.419 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:55.679 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:55.679 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
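nvmftestinit, traced below, builds the NET_TYPE=virt topology from scratch: the namespace nvmf_tgt_ns_spdk owns the target ends of two veth pairs (10.0.0.3 and 10.0.0.4), the initiator ends (10.0.0.1 and 10.0.0.2) stay in the root namespace, and the four bridge legs are enslaved to nvmf_br. Reduced to one of the two pairs, the setup is roughly (a sketch; the authoritative commands are in the trace that follows):

# First veth pair only; the *_if2 pair, link-up, and iptables steps mirror it.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br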
00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:55.680 Cannot find device "nvmf_init_br" 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:55.680 Cannot find device "nvmf_init_br2" 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:55.680 Cannot find device "nvmf_tgt_br" 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:55.680 Cannot find device "nvmf_tgt_br2" 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:55.680 Cannot find device "nvmf_init_br" 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:10:55.680 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:55.680 Cannot find device "nvmf_init_br2" 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:55.680 Cannot find device "nvmf_tgt_br" 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:55.680 Cannot find device "nvmf_tgt_br2" 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:55.680 Cannot find device "nvmf_br" 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:55.680 Cannot find device "nvmf_init_if" 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:55.680 Cannot find device "nvmf_init_if2" 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:55.680 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:55.680 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:55.680 
10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:55.680 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:55.939 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:55.939 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:10:55.939 00:10:55.939 --- 10.0.0.3 ping statistics --- 00:10:55.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.939 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:55.939 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:55.939 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:10:55.939 00:10:55.939 --- 10.0.0.4 ping statistics --- 00:10:55.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.939 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:55.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:55.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:10:55.939 00:10:55.939 --- 10.0.0.1 ping statistics --- 00:10:55.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.939 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:55.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:55.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:10:55.939 00:10:55.939 --- 10.0.0.2 ping statistics --- 00:10:55.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.939 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=67015 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 67015 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 67015 ']' 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:55.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:55.939 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.939 [2024-11-15 10:34:21.396707] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:10:55.939 [2024-11-15 10:34:21.396827] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.198 [2024-11-15 10:34:21.545152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.198 [2024-11-15 10:34:21.612251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.198 [2024-11-15 10:34:21.612318] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.198 [2024-11-15 10:34:21.612330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.198 [2024-11-15 10:34:21.612338] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.198 [2024-11-15 10:34:21.612345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.198 [2024-11-15 10:34:21.613698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:56.198 [2024-11-15 10:34:21.613784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:56.198 [2024-11-15 10:34:21.613872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.198 [2024-11-15 10:34:21.613872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:56.198 [2024-11-15 10:34:21.669334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:56.456 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:56.456 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:10:56.456 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:56.456 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:56.456 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.456 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.456 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:56.456 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.456 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.456 [2024-11-15 10:34:21.789145] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.456 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.456 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:56.456 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.456 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.456 Malloc0 00:10:56.456 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.457 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:10:56.457 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.457 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.457 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.457 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:56.457 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.457 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.457 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.457 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:56.457 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.457 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.457 [2024-11-15 10:34:21.859462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:56.457 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.457 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:56.457 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:56.457 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:56.457 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:56.457 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:56.457 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:56.457 { 00:10:56.457 "params": { 00:10:56.457 "name": "Nvme$subsystem", 00:10:56.457 "trtype": "$TEST_TRANSPORT", 00:10:56.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:56.457 "adrfam": "ipv4", 00:10:56.457 "trsvcid": "$NVMF_PORT", 00:10:56.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:56.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:56.457 "hdgst": ${hdgst:-false}, 00:10:56.457 "ddgst": ${ddgst:-false} 00:10:56.457 }, 00:10:56.457 "method": "bdev_nvme_attach_controller" 00:10:56.457 } 00:10:56.457 EOF 00:10:56.457 )") 00:10:56.457 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:56.457 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
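gen_nvmf_target_json expands the heredoc above once per requested subsystem (one by default, via "${@:-1}"), substituting the target address and port, then normalizes the result with jq; the rendered bdev_nvme_attach_controller config that bdevio reads from /dev/fd/62 is printed next. The target it attaches to was assembled by the rpc.py calls traced earlier, in essence (a condensed sketch of that same sequence, stripped of the test-harness wrapping):

# Same RPCs as the rpc_cmd traces above, in plain rpc.py form.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420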
00:10:56.457 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:56.457 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:56.457 "params": { 00:10:56.457 "name": "Nvme1", 00:10:56.457 "trtype": "tcp", 00:10:56.457 "traddr": "10.0.0.3", 00:10:56.457 "adrfam": "ipv4", 00:10:56.457 "trsvcid": "4420", 00:10:56.457 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:56.457 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:56.457 "hdgst": false, 00:10:56.457 "ddgst": false 00:10:56.457 }, 00:10:56.457 "method": "bdev_nvme_attach_controller" 00:10:56.457 }' 00:10:56.457 [2024-11-15 10:34:21.921850] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:10:56.457 [2024-11-15 10:34:21.921950] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67038 ] 00:10:56.715 [2024-11-15 10:34:22.076198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:56.715 [2024-11-15 10:34:22.149131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.715 [2024-11-15 10:34:22.149187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.715 [2024-11-15 10:34:22.149192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.974 [2024-11-15 10:34:22.215327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:56.974 I/O targets: 00:10:56.974 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:56.974 00:10:56.974 00:10:56.974 CUnit - A unit testing framework for C - Version 2.1-3 00:10:56.974 http://cunit.sourceforge.net/ 00:10:56.974 00:10:56.974 00:10:56.974 Suite: bdevio tests on: Nvme1n1 00:10:56.974 Test: blockdev write read block ...passed 00:10:56.974 Test: blockdev write zeroes read block ...passed 00:10:56.974 Test: blockdev write zeroes read no split ...passed 00:10:56.974 Test: blockdev write zeroes read split ...passed 00:10:56.974 Test: blockdev write zeroes read split partial ...passed 00:10:56.974 Test: blockdev reset ...[2024-11-15 10:34:22.369842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:56.974 [2024-11-15 10:34:22.369954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce0180 (9): Bad file descriptor 00:10:56.974 passed 00:10:56.974 Test: blockdev write read 8 blocks ...[2024-11-15 10:34:22.385853] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:56.974 passed 00:10:56.974 Test: blockdev write read size > 128k ...passed 00:10:56.974 Test: blockdev write read invalid size ...passed 00:10:56.974 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:56.974 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:56.974 Test: blockdev write read max offset ...passed 00:10:56.974 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:56.974 Test: blockdev writev readv 8 blocks ...passed 00:10:56.974 Test: blockdev writev readv 30 x 1block ...passed 00:10:56.974 Test: blockdev writev readv block ...passed 00:10:56.974 Test: blockdev writev readv size > 128k ...passed 00:10:56.974 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:56.974 Test: blockdev comparev and writev ...[2024-11-15 10:34:22.395646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.974 [2024-11-15 10:34:22.395695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:56.974 [2024-11-15 10:34:22.395721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.975 [2024-11-15 10:34:22.395735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:56.975 passed 00:10:56.975 Test: blockdev nvme passthru rw ...[2024-11-15 10:34:22.396041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.975 [2024-11-15 10:34:22.396070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:56.975 [2024-11-15 10:34:22.396091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.975 [2024-11-15 10:34:22.396104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:56.975 [2024-11-15 10:34:22.396394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.975 [2024-11-15 10:34:22.396414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:56.975 [2024-11-15 10:34:22.396434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.975 [2024-11-15 10:34:22.396447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:56.975 [2024-11-15 10:34:22.396750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.975 [2024-11-15 10:34:22.396770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:56.975 [2024-11-15 10:34:22.396791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.975 [2024-11-15 10:34:22.396803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED 
FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:56.975 passed 00:10:56.975 Test: blockdev nvme passthru vendor specific ...passed 00:10:56.975 Test: blockdev nvme admin passthru ...[2024-11-15 10:34:22.397892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.975 [2024-11-15 10:34:22.397935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:56.975 [2024-11-15 10:34:22.398079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.975 [2024-11-15 10:34:22.398099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:56.975 [2024-11-15 10:34:22.398216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.975 [2024-11-15 10:34:22.398235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:56.975 [2024-11-15 10:34:22.398347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.975 [2024-11-15 10:34:22.398366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:56.975 passed 00:10:56.975 Test: blockdev copy ...passed 00:10:56.975 00:10:56.975 Run Summary: Type Total Ran Passed Failed Inactive 00:10:56.975 suites 1 1 n/a 0 0 00:10:56.975 tests 23 23 23 0 0 00:10:56.975 asserts 152 152 152 0 n/a 00:10:56.975 00:10:56.975 Elapsed time = 0.145 seconds 00:10:57.234 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.234 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.234 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.234 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.234 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:57.234 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:57.234 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:57.234 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:57.234 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:57.234 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:57.234 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:57.234 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:57.234 rmmod nvme_tcp 00:10:57.234 rmmod nvme_fabrics 00:10:57.234 rmmod nvme_keyring 00:10:57.234 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:57.234 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:57.234 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:57.234 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
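
[Editor's note] The rmmod lines just above come from nvmfcleanup (traced at nvmf/common.sh@121-129): unloading nvme-tcp can fail while connections are still draining, so the helper runs under set +e inside a bounded retry loop. A condensed sketch of the traced logic, with the retry details simplified:

    nvmfcleanup() {
        sync
        set +e
        for i in {1..20}; do
            # -v echoes the underlying rmmod calls seen in the log
            modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
            sleep 1
        done
        set -e
        return 0
    }
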
nvmf/common.sh@517 -- # '[' -n 67015 ']' 00:10:57.234 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 67015 00:10:57.234 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 67015 ']' 00:10:57.234 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 67015 00:10:57.234 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:10:57.234 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:57.234 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67015 00:10:57.492 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:10:57.492 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:10:57.492 killing process with pid 67015 00:10:57.492 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67015' 00:10:57.492 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 67015 00:10:57.492 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 67015 00:10:57.492 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:57.492 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:57.492 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:57.492 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:57.492 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:57.492 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:57.492 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:57.492 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:57.492 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:57.492 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:57.752 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:57.752 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:57.752 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:57.752 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:57.752 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:57.752 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:57.752 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:57.752 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:57.752 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:57.752 10:34:23 
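
[Editor's note] killprocess, traced above, is the careful shutdown helper: it confirms the pid is still alive with kill -0, refuses to signal a process whose comm is sudo (to avoid killing a privilege wrapper instead of the target), then kills and reaps. A condensed re-creation; the real helper in autotest_common.sh carries extra fallbacks:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 0        # already gone
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_3 in the log
            [[ $name == sudo ]] && return 1            # never kill the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }
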
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:57.752 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:57.752 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:57.752 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:57.752 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.752 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.752 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.752 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:10:57.752 00:10:57.752 real 0m2.482s 00:10:57.752 user 0m6.788s 00:10:57.752 sys 0m0.843s 00:10:57.752 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:57.752 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.752 ************************************ 00:10:57.752 END TEST nvmf_bdevio 00:10:57.752 ************************************ 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:58.011 00:10:58.011 real 2m35.536s 00:10:58.011 user 6m48.948s 00:10:58.011 sys 0m52.146s 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:58.011 ************************************ 00:10:58.011 END TEST nvmf_target_core 00:10:58.011 ************************************ 00:10:58.011 10:34:23 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:58.011 10:34:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:58.011 10:34:23 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:58.011 10:34:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:58.011 ************************************ 00:10:58.011 START TEST nvmf_target_extra 00:10:58.011 ************************************ 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:58.011 * Looking for test storage... 
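
[Editor's note] The asterisk banners with START TEST / END TEST and the real/user/sys lines above are produced by the run_test wrapper, which brackets each suite and times it with the shell's time builtin. A condensed sketch (xtrace toggling and failure accounting are elided):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp
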
00:10:58.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:58.011 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:58.012 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:58.012 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:58.012 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:58.012 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:58.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.271 --rc genhtml_branch_coverage=1 00:10:58.271 --rc genhtml_function_coverage=1 00:10:58.271 --rc genhtml_legend=1 00:10:58.271 --rc geninfo_all_blocks=1 00:10:58.271 --rc geninfo_unexecuted_blocks=1 00:10:58.271 00:10:58.271 ' 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:58.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.271 --rc genhtml_branch_coverage=1 00:10:58.271 --rc genhtml_function_coverage=1 00:10:58.271 --rc genhtml_legend=1 00:10:58.271 --rc geninfo_all_blocks=1 00:10:58.271 --rc geninfo_unexecuted_blocks=1 00:10:58.271 00:10:58.271 ' 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:58.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.271 --rc genhtml_branch_coverage=1 00:10:58.271 --rc genhtml_function_coverage=1 00:10:58.271 --rc genhtml_legend=1 00:10:58.271 --rc geninfo_all_blocks=1 00:10:58.271 --rc geninfo_unexecuted_blocks=1 00:10:58.271 00:10:58.271 ' 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:58.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.271 --rc genhtml_branch_coverage=1 00:10:58.271 --rc genhtml_function_coverage=1 00:10:58.271 --rc genhtml_legend=1 00:10:58.271 --rc geninfo_all_blocks=1 00:10:58.271 --rc geninfo_unexecuted_blocks=1 00:10:58.271 00:10:58.271 ' 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.271 10:34:23 
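
[Editor's note] The ver1/ver2 trace above is scripts/common.sh comparing the installed lcov (1.15) against version 2 in pure bash: both version strings are split on ".-:" and compared field by field, with missing fields defaulting to 0. A condensed sketch of that logic, covering only the "<" path the log exercises:

    lt() { cmp_versions "$1" "<" "$2"; }
    cmp_versions() {
        local IFS=.-:                  # split fields on dot, dash, colon
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == ">" ]]; return; fi
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == "<" ]]; return; fi
        done
        [[ $op == "==" ]]              # all fields equal
    }

    lt 1.15 2 && echo "lcov predates 2.x"
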
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:58.271 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:58.271 10:34:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:58.272 ************************************ 00:10:58.272 START TEST nvmf_auth_target 00:10:58.272 ************************************ 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:58.272 * Looking for test storage... 
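
[Editor's note] Two details from the nvmf/common.sh prologue traced above are worth unpacking. First, the host identity comes from nvme-cli: gen-hostnqn emits a UUID-based NQN, and the host ID is that same UUID, so it can be recovered with a parameter expansion (an assumption about how common.sh derives it, but consistent with the matching values in the log). Second, the "[: : integer expression expected" error is test(1) being handed an empty string where a number is required; a defaulted expansion is the usual fix (illustrative, not what common.sh currently does, and SOME_FLAG is a hypothetical stand-in for the empty variable):

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # bare UUID, e.g. 50e4d619-cecf-...

    SOME_FLAG=""                         # hypothetical unset test knob
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo enabled || echo disabled   # no error
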
00:10:58.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:58.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.272 --rc genhtml_branch_coverage=1 00:10:58.272 --rc genhtml_function_coverage=1 00:10:58.272 --rc genhtml_legend=1 00:10:58.272 --rc geninfo_all_blocks=1 00:10:58.272 --rc geninfo_unexecuted_blocks=1 00:10:58.272 00:10:58.272 ' 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:58.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.272 --rc genhtml_branch_coverage=1 00:10:58.272 --rc genhtml_function_coverage=1 00:10:58.272 --rc genhtml_legend=1 00:10:58.272 --rc geninfo_all_blocks=1 00:10:58.272 --rc geninfo_unexecuted_blocks=1 00:10:58.272 00:10:58.272 ' 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:58.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.272 --rc genhtml_branch_coverage=1 00:10:58.272 --rc genhtml_function_coverage=1 00:10:58.272 --rc genhtml_legend=1 00:10:58.272 --rc geninfo_all_blocks=1 00:10:58.272 --rc geninfo_unexecuted_blocks=1 00:10:58.272 00:10:58.272 ' 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:58.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.272 --rc genhtml_branch_coverage=1 00:10:58.272 --rc genhtml_function_coverage=1 00:10:58.272 --rc genhtml_legend=1 00:10:58.272 --rc geninfo_all_blocks=1 00:10:58.272 --rc geninfo_unexecuted_blocks=1 00:10:58.272 00:10:58.272 ' 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.272 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:58.273 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:58.273 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:58.531 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:58.531 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:58.531 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:58.531 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:58.531 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:58.531 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:58.531 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:58.531 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:58.531 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:58.531 
10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:58.531 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:58.531 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:58.531 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:58.531 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:58.531 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:58.531 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:58.531 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:58.531 Cannot find device "nvmf_init_br" 00:10:58.531 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:58.531 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:58.531 Cannot find device "nvmf_init_br2" 00:10:58.531 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:58.532 Cannot find device "nvmf_tgt_br" 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:58.532 Cannot find device "nvmf_tgt_br2" 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:58.532 Cannot find device "nvmf_init_br" 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:58.532 Cannot find device "nvmf_init_br2" 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:58.532 Cannot find device "nvmf_tgt_br" 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:58.532 Cannot find device "nvmf_tgt_br2" 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:58.532 Cannot find device "nvmf_br" 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:58.532 Cannot find device "nvmf_init_if" 00:10:58.532 10:34:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:58.532 Cannot find device "nvmf_init_if2" 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:58.532 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:58.532 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:58.532 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:58.532 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:58.532 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:58.532 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:58.532 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:58.790 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:58.790 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:58.790 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:58.790 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:58.790 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:58.790 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:58.790 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:58.790 10:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:58.790 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:58.790 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:58.790 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:58.790 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:58.790 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:58.790 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:58.790 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:58.790 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:58.790 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:58.790 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:58.790 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:58.790 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:58.790 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:58.790 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:58.790 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:58.790 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:10:58.790 00:10:58.790 --- 10.0.0.3 ping statistics --- 00:10:58.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.790 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:10:58.790 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:58.790 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:58.790 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:10:58.790 00:10:58.790 --- 10.0.0.4 ping statistics --- 00:10:58.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.790 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:58.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:58.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:10:58.791 00:10:58.791 --- 10.0.0.1 ping statistics --- 00:10:58.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.791 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:58.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:10:58.791 00:10:58.791 --- 10.0.0.2 ping statistics --- 00:10:58.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.791 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67331 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67331 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 67331 ']' 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
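
[Editor's note] The ip/iptables/ping sequence above is nvmf_veth_init building a self-contained TCP test topology: two veth pairs joined by a bridge, with the target side moved into the nvmf_tgt_ns_spdk namespace, firewall rules tagged with an SPDK_NVMF comment (so the iptr sweep seen during the earlier teardown, iptables-save | grep -v SPDK_NVMF | iptables-restore, can remove them wholesale), and pings in both directions as a smoke test. A condensed sketch, with the second interface pair and error handling elided:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3                                  # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
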
00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:58.791 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67350 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=096f11ce6b6c025d5481bd54d6d9801882b8254cf005020f 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.V9i 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 096f11ce6b6c025d5481bd54d6d9801882b8254cf005020f 0 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 096f11ce6b6c025d5481bd54d6d9801882b8254cf005020f 0 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=096f11ce6b6c025d5481bd54d6d9801882b8254cf005020f 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:59.360 10:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.V9i 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.V9i 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.V9i 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=72150d3dd9197f2456fc6f50b166c117aed387a41272a93695c65808b1e2c1bc 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.6DO 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 72150d3dd9197f2456fc6f50b166c117aed387a41272a93695c65808b1e2c1bc 3 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 72150d3dd9197f2456fc6f50b166c117aed387a41272a93695c65808b1e2c1bc 3 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=72150d3dd9197f2456fc6f50b166c117aed387a41272a93695c65808b1e2c1bc 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.6DO 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.6DO 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.6DO 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:10:59.360 10:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bd72a8ecd9dcdd6cd370d62439ceee96 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.o62 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bd72a8ecd9dcdd6cd370d62439ceee96 1 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bd72a8ecd9dcdd6cd370d62439ceee96 1 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bd72a8ecd9dcdd6cd370d62439ceee96 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.o62 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.o62 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.o62 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ed79ececf52666110020bd9182634d4e81ba88bedc7defff 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.B2q 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ed79ececf52666110020bd9182634d4e81ba88bedc7defff 2 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ed79ececf52666110020bd9182634d4e81ba88bedc7defff 2 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ed79ececf52666110020bd9182634d4e81ba88bedc7defff 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:10:59.360 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.B2q 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.B2q 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.B2q 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=51e87b8520a92106b7bc209adeaff606f78862f80c1e99a0 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.R5t 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 51e87b8520a92106b7bc209adeaff606f78862f80c1e99a0 2 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 51e87b8520a92106b7bc209adeaff606f78862f80c1e99a0 2 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=51e87b8520a92106b7bc209adeaff606f78862f80c1e99a0 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.R5t 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.R5t 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.R5t 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:59.620 10:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a95dbee3c0b4c001b4b1fdc5a1472b28 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.AWI 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a95dbee3c0b4c001b4b1fdc5a1472b28 1 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a95dbee3c0b4c001b4b1fdc5a1472b28 1 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a95dbee3c0b4c001b4b1fdc5a1472b28 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:10:59.620 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.AWI 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.AWI 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.AWI 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6a94157fc63f9bdb5436dad3435a83b03b6ead9d24a0efd918e544120cd08cc3 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.zqo 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
6a94157fc63f9bdb5436dad3435a83b03b6ead9d24a0efd918e544120cd08cc3 3 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6a94157fc63f9bdb5436dad3435a83b03b6ead9d24a0efd918e544120cd08cc3 3 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6a94157fc63f9bdb5436dad3435a83b03b6ead9d24a0efd918e544120cd08cc3 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.zqo 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.zqo 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.zqo 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67331 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 67331 ']' 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:59.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:59.620 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.878 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:59.878 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:10:59.878 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67350 /var/tmp/host.sock 00:10:59.879 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 67350 ']' 00:10:59.879 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:10:59.879 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:59.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:59.879 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
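All eight key files above (keys[0]..keys[3] plus the controller-side ckeys[0]..ckeys[2]) come out of the same gen_dhchap_key helper whose trace repeats through this section. Below is a minimal self-contained sketch of that helper, reconstructed from the traced commands in nvmf/common.sh; the secret layout ("DHHC-1:<hash id>:<base64 of the key bytes plus a little-endian CRC32>:") follows the DH-HMAC-CHAP secret representation, and python3 stands in for the bare "python -" heredoc the suite actually runs:

# Sketch of gen_dhchap_key <digest> <len> as traced above.
gen_dhchap_key_sketch() {
    local digest=$1 len=$2 key file
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    # xxd emits len/2 random bytes as a len-character lowercase hex string
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 -c 'import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))' \
        "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"   # matches the chmod in the trace; the keyring expects owner-only files
    echo "$file"
}

This is why the secrets later in the log carry prefixes like DHHC-1:00: (null) and DHHC-1:03: (sha512): the second argument to format_dhchap_key is the digest id rendered as two hex digits.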
00:10:59.879 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:59.879 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.138 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:00.138 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:11:00.138 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:11:00.138 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.138 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.138 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.138 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:00.138 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.V9i 00:11:00.138 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.138 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.138 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.138 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.V9i 00:11:00.138 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.V9i 00:11:00.396 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.6DO ]] 00:11:00.396 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6DO 00:11:00.396 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.396 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.655 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.655 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6DO 00:11:00.655 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6DO 00:11:00.913 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:00.913 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.o62 00:11:00.913 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.913 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.913 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.913 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.o62 00:11:00.913 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.o62 00:11:01.172 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.B2q ]] 00:11:01.172 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.B2q 00:11:01.172 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.172 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.172 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.172 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.B2q 00:11:01.172 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.B2q 00:11:01.430 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:01.430 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.R5t 00:11:01.430 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.430 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.430 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.430 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.R5t 00:11:01.430 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.R5t 00:11:01.688 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.AWI ]] 00:11:01.688 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.AWI 00:11:01.688 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.688 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.688 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.688 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.AWI 00:11:01.688 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.AWI 00:11:02.255 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:02.255 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.zqo 00:11:02.255 10:34:27 
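Each generated key file is registered twice: once in the target's keyring over the default RPC socket (rpc_cmd, /var/tmp/spdk.sock) and once in the host stack over /var/tmp/host.sock (the hostrpc wrapper at target/auth.sh@31). A condensed sketch of the registration loop at target/auth.sh@108-113, assuming the rpc.py path and the keys/ckeys arrays visible in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in "${!keys[@]}"; do
    "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"                        # target keyring
    "$rpc" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"  # host keyring
    if [[ -n ${ckeys[$i]} ]]; then  # ckeys[3] is empty, so key3 gets no controller key
        "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        "$rpc" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done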
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.255 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.255 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.255 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.zqo 00:11:02.255 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.zqo 00:11:02.255 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:11:02.255 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:02.255 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:02.255 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:02.255 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:02.255 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:02.538 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:11:02.538 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:02.538 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:02.538 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:02.538 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:02.538 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.538 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.538 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.538 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.797 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.797 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.797 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.797 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:03.056 00:11:03.056 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:03.056 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.056 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:03.314 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.314 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.314 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.314 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.314 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.314 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:03.314 { 00:11:03.314 "cntlid": 1, 00:11:03.314 "qid": 0, 00:11:03.314 "state": "enabled", 00:11:03.314 "thread": "nvmf_tgt_poll_group_000", 00:11:03.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:11:03.314 "listen_address": { 00:11:03.314 "trtype": "TCP", 00:11:03.314 "adrfam": "IPv4", 00:11:03.314 "traddr": "10.0.0.3", 00:11:03.314 "trsvcid": "4420" 00:11:03.314 }, 00:11:03.314 "peer_address": { 00:11:03.314 "trtype": "TCP", 00:11:03.314 "adrfam": "IPv4", 00:11:03.314 "traddr": "10.0.0.1", 00:11:03.314 "trsvcid": "45898" 00:11:03.314 }, 00:11:03.314 "auth": { 00:11:03.314 "state": "completed", 00:11:03.314 "digest": "sha256", 00:11:03.314 "dhgroup": "null" 00:11:03.314 } 00:11:03.314 } 00:11:03.314 ]' 00:11:03.314 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:03.314 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:03.314 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:03.314 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:03.314 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:03.314 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.314 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.314 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.881 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:11:03.881 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:11:09.144 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.144 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:11:09.144 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.144 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.144 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.144 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:09.144 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:09.144 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:09.144 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:11:09.144 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:09.144 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:09.144 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:09.144 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:09.144 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.144 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.144 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.144 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.144 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.144 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.144 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.144 10:34:33 
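The round that resumes below for key1 repeats the connect_authenticate pattern just completed for key0: register the host NQN on the subsystem with a DH-HMAC-CHAP key pair, attach a controller from the host side, then assert from the qpair listing that authentication actually completed with the expected digest and DH group. A compact sketch using the suite's own rpc_cmd/hostrpc helpers and the jq filters that appear verbatim in the trace:

# One connect_authenticate round (digest/dhgroup/keyid as set by the enclosing loop)
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
hostrpc bdev_nvme_detach_controller nvme0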
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.144 00:11:09.144 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:09.144 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.144 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:09.144 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.144 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.144 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.144 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.144 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.144 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:09.144 { 00:11:09.144 "cntlid": 3, 00:11:09.144 "qid": 0, 00:11:09.144 "state": "enabled", 00:11:09.144 "thread": "nvmf_tgt_poll_group_000", 00:11:09.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:11:09.144 "listen_address": { 00:11:09.144 "trtype": "TCP", 00:11:09.144 "adrfam": "IPv4", 00:11:09.144 "traddr": "10.0.0.3", 00:11:09.144 "trsvcid": "4420" 00:11:09.144 }, 00:11:09.144 "peer_address": { 00:11:09.144 "trtype": "TCP", 00:11:09.144 "adrfam": "IPv4", 00:11:09.144 "traddr": "10.0.0.1", 00:11:09.144 "trsvcid": "45920" 00:11:09.144 }, 00:11:09.144 "auth": { 00:11:09.144 "state": "completed", 00:11:09.144 "digest": "sha256", 00:11:09.144 "dhgroup": "null" 00:11:09.144 } 00:11:09.144 } 00:11:09.144 ]' 00:11:09.144 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:09.144 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:09.144 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:09.409 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:09.409 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:09.409 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.409 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.409 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.671 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret 
DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:11:09.671 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:11:10.238 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.238 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:11:10.238 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.238 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.238 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.238 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:10.238 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:10.238 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:10.805 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:11:10.805 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:10.805 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:10.805 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:10.805 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:10.805 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.805 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.805 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.805 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.805 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.805 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.805 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.805 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:11.063 00:11:11.063 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:11.063 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:11.063 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.324 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.324 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.324 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.324 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.324 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.324 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:11.324 { 00:11:11.324 "cntlid": 5, 00:11:11.324 "qid": 0, 00:11:11.324 "state": "enabled", 00:11:11.324 "thread": "nvmf_tgt_poll_group_000", 00:11:11.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:11:11.324 "listen_address": { 00:11:11.324 "trtype": "TCP", 00:11:11.324 "adrfam": "IPv4", 00:11:11.324 "traddr": "10.0.0.3", 00:11:11.324 "trsvcid": "4420" 00:11:11.324 }, 00:11:11.324 "peer_address": { 00:11:11.324 "trtype": "TCP", 00:11:11.324 "adrfam": "IPv4", 00:11:11.324 "traddr": "10.0.0.1", 00:11:11.324 "trsvcid": "45938" 00:11:11.324 }, 00:11:11.324 "auth": { 00:11:11.324 "state": "completed", 00:11:11.324 "digest": "sha256", 00:11:11.324 "dhgroup": "null" 00:11:11.324 } 00:11:11.324 } 00:11:11.324 ]' 00:11:11.324 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:11.324 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:11.324 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:11.586 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:11.586 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:11.586 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.586 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.586 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.845 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:11:11.845 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:11:12.412 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.412 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:11:12.412 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.412 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.412 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.412 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:12.412 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:12.412 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:12.671 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:11:12.671 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:12.671 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:12.671 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:12.671 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:12.671 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.671 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key3 00:11:12.671 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.671 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.671 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.671 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:12.671 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:12.671 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:13.238 00:11:13.238 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:13.238 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.238 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:13.496 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.496 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.496 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.496 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.496 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.496 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:13.496 { 00:11:13.496 "cntlid": 7, 00:11:13.496 "qid": 0, 00:11:13.496 "state": "enabled", 00:11:13.496 "thread": "nvmf_tgt_poll_group_000", 00:11:13.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:11:13.496 "listen_address": { 00:11:13.496 "trtype": "TCP", 00:11:13.496 "adrfam": "IPv4", 00:11:13.496 "traddr": "10.0.0.3", 00:11:13.496 "trsvcid": "4420" 00:11:13.496 }, 00:11:13.496 "peer_address": { 00:11:13.496 "trtype": "TCP", 00:11:13.496 "adrfam": "IPv4", 00:11:13.496 "traddr": "10.0.0.1", 00:11:13.496 "trsvcid": "57996" 00:11:13.496 }, 00:11:13.496 "auth": { 00:11:13.496 "state": "completed", 00:11:13.496 "digest": "sha256", 00:11:13.496 "dhgroup": "null" 00:11:13.496 } 00:11:13.496 } 00:11:13.496 ]' 00:11:13.496 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:13.496 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:13.496 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:13.496 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:13.496 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:13.496 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.496 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.496 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.064 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:11:14.064 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:11:14.632 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.632 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:11:14.632 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.632 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.632 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.632 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:14.632 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:14.632 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:14.632 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:14.892 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:11:14.892 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:14.892 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:14.892 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:14.892 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:14.892 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.892 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.892 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.892 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.892 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.892 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.892 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.892 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.235 00:11:15.235 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:15.235 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.235 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:15.498 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.498 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.498 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.498 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.498 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.498 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:15.498 { 00:11:15.498 "cntlid": 9, 00:11:15.498 "qid": 0, 00:11:15.498 "state": "enabled", 00:11:15.498 "thread": "nvmf_tgt_poll_group_000", 00:11:15.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:11:15.498 "listen_address": { 00:11:15.498 "trtype": "TCP", 00:11:15.498 "adrfam": "IPv4", 00:11:15.498 "traddr": "10.0.0.3", 00:11:15.498 "trsvcid": "4420" 00:11:15.498 }, 00:11:15.498 "peer_address": { 00:11:15.498 "trtype": "TCP", 00:11:15.498 "adrfam": "IPv4", 00:11:15.498 "traddr": "10.0.0.1", 00:11:15.498 "trsvcid": "58040" 00:11:15.498 }, 00:11:15.498 "auth": { 00:11:15.498 "state": "completed", 00:11:15.498 "digest": "sha256", 00:11:15.498 "dhgroup": "ffdhe2048" 00:11:15.498 } 00:11:15.498 } 00:11:15.498 ]' 00:11:15.498 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:15.498 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:15.498 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:15.758 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:15.758 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:15.758 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.758 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.758 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.016 
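By this point the suite has moved from --dhchap-dhgroups null to ffdhe2048 while keeping the sha256 digest; the @118-@121 markers traced earlier show the enclosing test matrix. Roughly, per those for-loops (digests/dhgroups are the arrays the suite iterates; their full contents are not visible in this excerpt):

for digest in "${digests[@]}"; do        # sha256 first, then the remaining digests
    for dhgroup in "${dhgroups[@]}"; do  # null, ffdhe2048, ...
        for keyid in "${!keys[@]}"; do   # key0..key3
            # reconfigure the host stack, then run one authenticated round
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done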
10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:11:16.016 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:11:16.583 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.583 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:11:16.583 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.583 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.583 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.583 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:16.583 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:16.583 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:17.151 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:11:17.151 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:17.151 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:17.151 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:17.151 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:17.151 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.151 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.151 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.151 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.151 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.151 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.151 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.151 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.410 00:11:17.410 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:17.410 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.410 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:17.669 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.669 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.669 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.669 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.669 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.669 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:17.669 { 00:11:17.669 "cntlid": 11, 00:11:17.669 "qid": 0, 00:11:17.669 "state": "enabled", 00:11:17.669 "thread": "nvmf_tgt_poll_group_000", 00:11:17.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:11:17.669 "listen_address": { 00:11:17.669 "trtype": "TCP", 00:11:17.669 "adrfam": "IPv4", 00:11:17.669 "traddr": "10.0.0.3", 00:11:17.669 "trsvcid": "4420" 00:11:17.669 }, 00:11:17.669 "peer_address": { 00:11:17.669 "trtype": "TCP", 00:11:17.669 "adrfam": "IPv4", 00:11:17.669 "traddr": "10.0.0.1", 00:11:17.669 "trsvcid": "58058" 00:11:17.669 }, 00:11:17.669 "auth": { 00:11:17.669 "state": "completed", 00:11:17.669 "digest": "sha256", 00:11:17.669 "dhgroup": "ffdhe2048" 00:11:17.669 } 00:11:17.669 } 00:11:17.669 ]' 00:11:17.669 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:17.669 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:17.669 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:17.669 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:17.669 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:17.927 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.927 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.927 
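Annotation: each keyid repeats the same two-sided handshake seen above — the target is first told which named keys the host may authenticate with (nvmf_subsystem_add_host), and the SPDK host then attaches while presenting the matching pair. Sketched below for key1 with the addresses and NQNs from this trace; key1/ckey1 refer to keyring entries the script registered earlier in the run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f
# target side: authorize the host and bind its DH-CHAP key pair
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# host side (separate RPC socket): attach, presenting the same key pair
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1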
10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.185 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:11:18.185 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:11:18.750 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.750 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:11:18.750 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.750 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.750 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.750 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:18.750 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:18.750 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:19.318 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:11:19.318 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:19.318 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:19.318 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:19.318 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:19.318 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.318 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:19.318 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.318 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.318 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:19.318 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:19.318 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:19.318 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:19.576 00:11:19.576 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:19.576 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:19.576 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.834 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.834 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.834 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.834 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.834 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.834 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:19.834 { 00:11:19.834 "cntlid": 13, 00:11:19.834 "qid": 0, 00:11:19.834 "state": "enabled", 00:11:19.834 "thread": "nvmf_tgt_poll_group_000", 00:11:19.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:11:19.834 "listen_address": { 00:11:19.834 "trtype": "TCP", 00:11:19.834 "adrfam": "IPv4", 00:11:19.834 "traddr": "10.0.0.3", 00:11:19.834 "trsvcid": "4420" 00:11:19.834 }, 00:11:19.834 "peer_address": { 00:11:19.834 "trtype": "TCP", 00:11:19.834 "adrfam": "IPv4", 00:11:19.834 "traddr": "10.0.0.1", 00:11:19.834 "trsvcid": "58074" 00:11:19.834 }, 00:11:19.834 "auth": { 00:11:19.834 "state": "completed", 00:11:19.834 "digest": "sha256", 00:11:19.834 "dhgroup": "ffdhe2048" 00:11:19.834 } 00:11:19.834 } 00:11:19.834 ]' 00:11:19.834 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:19.834 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:19.834 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:19.834 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:19.834 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:20.092 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.092 10:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.092 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.350 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:11:20.350 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:11:20.916 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.917 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:11:20.917 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.917 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.917 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.917 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:20.917 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:20.917 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:21.175 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:11:21.175 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:21.175 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:21.175 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:21.175 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:21.175 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.175 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key3 00:11:21.175 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.175 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
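Annotation: note the asymmetry in the pass above — key3 is added with --dhchap-key only, no --dhchap-ctrlr-key. That comes from the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion visible in the trace: when the ckeys slot for a keyid is empty, the flag is omitted entirely, so that keyid exercises unidirectional authentication (the host proves itself, the controller is not challenged back). A stand-alone illustration of the expansion, with hypothetical array contents mirroring this run:

ckeys=("present" "present" "present" "")   # hypothetical: slot 3 empty, as for key3 above
for keyid in "${!ckeys[@]}"; do
    # empty slot -> ckey expands to nothing; non-empty -> two words: flag and key name
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "key$keyid -> ${ckey[*]:-<unidirectional, no controller key>}"
done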
00:11:21.175 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.175 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:21.175 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:21.175 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:21.741 00:11:21.741 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:21.741 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:21.741 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.000 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.000 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.000 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.000 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.000 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.000 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:22.000 { 00:11:22.000 "cntlid": 15, 00:11:22.000 "qid": 0, 00:11:22.000 "state": "enabled", 00:11:22.000 "thread": "nvmf_tgt_poll_group_000", 00:11:22.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:11:22.000 "listen_address": { 00:11:22.000 "trtype": "TCP", 00:11:22.000 "adrfam": "IPv4", 00:11:22.000 "traddr": "10.0.0.3", 00:11:22.000 "trsvcid": "4420" 00:11:22.000 }, 00:11:22.000 "peer_address": { 00:11:22.000 "trtype": "TCP", 00:11:22.000 "adrfam": "IPv4", 00:11:22.000 "traddr": "10.0.0.1", 00:11:22.000 "trsvcid": "33878" 00:11:22.000 }, 00:11:22.000 "auth": { 00:11:22.000 "state": "completed", 00:11:22.000 "digest": "sha256", 00:11:22.000 "dhgroup": "ffdhe2048" 00:11:22.000 } 00:11:22.000 } 00:11:22.000 ]' 00:11:22.000 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:22.000 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:22.000 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:22.000 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:22.000 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:22.000 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.000 
10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.000 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.566 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:11:22.567 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:11:23.134 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.134 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:11:23.134 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.134 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.134 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.134 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:23.134 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:23.134 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:23.134 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:23.436 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:11:23.436 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:23.436 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:23.436 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:23.436 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:23.436 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.436 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.436 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.436 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:23.436 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.436 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.436 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.436 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:24.003 00:11:24.003 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:24.003 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:24.003 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.261 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.261 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.261 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.261 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.261 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.261 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:24.261 { 00:11:24.261 "cntlid": 17, 00:11:24.261 "qid": 0, 00:11:24.261 "state": "enabled", 00:11:24.261 "thread": "nvmf_tgt_poll_group_000", 00:11:24.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:11:24.261 "listen_address": { 00:11:24.261 "trtype": "TCP", 00:11:24.261 "adrfam": "IPv4", 00:11:24.261 "traddr": "10.0.0.3", 00:11:24.261 "trsvcid": "4420" 00:11:24.261 }, 00:11:24.261 "peer_address": { 00:11:24.261 "trtype": "TCP", 00:11:24.261 "adrfam": "IPv4", 00:11:24.261 "traddr": "10.0.0.1", 00:11:24.261 "trsvcid": "33900" 00:11:24.261 }, 00:11:24.261 "auth": { 00:11:24.261 "state": "completed", 00:11:24.261 "digest": "sha256", 00:11:24.261 "dhgroup": "ffdhe3072" 00:11:24.261 } 00:11:24.261 } 00:11:24.261 ]' 00:11:24.261 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:24.261 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:24.261 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:24.261 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:24.261 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:24.261 10:34:49 
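Annotation: the DH group has now advanced from ffdhe2048 to ffdhe3072. The auth.sh@119/@120 trace markers show the nesting: an outer loop over DH groups and an inner loop over the keyids, with bdev_nvme_set_options pinning the host to exactly one digest/group pair before every connect, so the negotiated values checked by jq are deterministic. Reconstructed shape of the sweep — not a standalone script, since connect_authenticate is the auth.sh function traced inline above, and only sha256 appears as the digest in this part of the log:

hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }  # the script's @31 wrapper
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)            # groups visible in this section
for dhgroup in "${dhgroups[@]}"; do                 # auth.sh@119
    for keyid in 0 1 2 3; do                        # auth.sh@120, "${!keys[@]}" in the script
        # restrict the host to a single digest/group so the negotiation outcome is fixed
        hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha256 "$dhgroup" "$keyid"   # auth.sh@123: attach-and-verify pass
    done
done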
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.261 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.261 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.519 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:11:24.519 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:11:25.455 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.455 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:11:25.455 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.455 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.455 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.455 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:25.455 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:25.455 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:25.714 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:11:25.714 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:25.714 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:25.714 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:25.714 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:25.714 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.714 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:11:25.714 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.714 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.714 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.714 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.714 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.714 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.972 00:11:25.972 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:25.972 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.972 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.538 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.538 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.538 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.538 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.538 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.538 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:26.538 { 00:11:26.538 "cntlid": 19, 00:11:26.538 "qid": 0, 00:11:26.538 "state": "enabled", 00:11:26.538 "thread": "nvmf_tgt_poll_group_000", 00:11:26.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:11:26.538 "listen_address": { 00:11:26.538 "trtype": "TCP", 00:11:26.538 "adrfam": "IPv4", 00:11:26.538 "traddr": "10.0.0.3", 00:11:26.538 "trsvcid": "4420" 00:11:26.538 }, 00:11:26.538 "peer_address": { 00:11:26.538 "trtype": "TCP", 00:11:26.539 "adrfam": "IPv4", 00:11:26.539 "traddr": "10.0.0.1", 00:11:26.539 "trsvcid": "33932" 00:11:26.539 }, 00:11:26.539 "auth": { 00:11:26.539 "state": "completed", 00:11:26.539 "digest": "sha256", 00:11:26.539 "dhgroup": "ffdhe3072" 00:11:26.539 } 00:11:26.539 } 00:11:26.539 ]' 00:11:26.539 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:26.539 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:26.539 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:26.539 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:26.539 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:26.539 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.539 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.539 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.796 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:11:26.796 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:11:27.730 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.730 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:11:27.730 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.730 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.730 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.730 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:27.730 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:27.730 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:27.988 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:11:27.988 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:27.988 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:27.988 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:27.988 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:27.988 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.988 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.988 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.988 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.988 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.988 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.988 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.988 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:28.246 00:11:28.504 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:28.504 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:28.504 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.504 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.762 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.762 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.762 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.762 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.762 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:28.762 { 00:11:28.762 "cntlid": 21, 00:11:28.762 "qid": 0, 00:11:28.762 "state": "enabled", 00:11:28.762 "thread": "nvmf_tgt_poll_group_000", 00:11:28.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:11:28.762 "listen_address": { 00:11:28.762 "trtype": "TCP", 00:11:28.762 "adrfam": "IPv4", 00:11:28.762 "traddr": "10.0.0.3", 00:11:28.762 "trsvcid": "4420" 00:11:28.762 }, 00:11:28.762 "peer_address": { 00:11:28.762 "trtype": "TCP", 00:11:28.762 "adrfam": "IPv4", 00:11:28.762 "traddr": "10.0.0.1", 00:11:28.762 "trsvcid": "33970" 00:11:28.762 }, 00:11:28.762 "auth": { 00:11:28.762 "state": "completed", 00:11:28.762 "digest": "sha256", 00:11:28.762 "dhgroup": "ffdhe3072" 00:11:28.762 } 00:11:28.762 } 00:11:28.762 ]' 00:11:28.762 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:28.762 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:28.762 10:34:54 
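Annotation: besides the SPDK host, every keyid is also exercised through the kernel initiator — the nvme_connect lines above shell out to nvme-cli with the plaintext DHHC-1 secrets rather than named keyring entries, then disconnect. Sketch of that leg with the trace's addresses; the DHHC-1 strings are abbreviated here, since the full secrets are printed in the log itself:

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f
# kernel-host leg: plaintext DH-CHAP secrets on the command line (values elided; see trace)
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -l 0 \
    -q "$hostnqn" --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f \
    --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
nvme disconnect -n "$subnqn"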
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:28.762 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:28.762 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:28.762 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.762 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.762 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.021 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:11:29.021 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:11:29.588 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.588 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:11:29.588 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.588 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.588 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.588 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:29.588 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:29.588 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:30.161 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:11:30.161 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.161 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:30.161 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:30.161 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:30.161 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.161 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key3 00:11:30.161 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.161 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.161 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.161 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:30.161 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:30.161 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:30.420 00:11:30.420 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:30.420 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.420 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:30.680 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.680 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.680 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.680 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.680 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.680 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:30.680 { 00:11:30.680 "cntlid": 23, 00:11:30.680 "qid": 0, 00:11:30.680 "state": "enabled", 00:11:30.680 "thread": "nvmf_tgt_poll_group_000", 00:11:30.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:11:30.680 "listen_address": { 00:11:30.680 "trtype": "TCP", 00:11:30.680 "adrfam": "IPv4", 00:11:30.680 "traddr": "10.0.0.3", 00:11:30.680 "trsvcid": "4420" 00:11:30.680 }, 00:11:30.680 "peer_address": { 00:11:30.680 "trtype": "TCP", 00:11:30.680 "adrfam": "IPv4", 00:11:30.680 "traddr": "10.0.0.1", 00:11:30.680 "trsvcid": "33998" 00:11:30.680 }, 00:11:30.680 "auth": { 00:11:30.680 "state": "completed", 00:11:30.680 "digest": "sha256", 00:11:30.680 "dhgroup": "ffdhe3072" 00:11:30.680 } 00:11:30.680 } 00:11:30.680 ]' 00:11:30.680 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:30.680 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:11:30.680 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:30.680 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:30.680 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:30.939 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.939 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.939 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.206 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:11:31.206 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:11:31.794 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.794 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:11:31.794 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.794 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.794 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.794 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:31.794 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:31.794 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:31.794 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:32.053 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:11:32.053 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:32.053 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:32.053 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:32.053 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:32.053 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.053 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.053 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.053 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.053 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.053 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.053 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.053 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.312 00:11:32.569 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:32.569 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:32.569 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.827 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.827 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.827 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.827 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.827 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.827 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:32.827 { 00:11:32.827 "cntlid": 25, 00:11:32.827 "qid": 0, 00:11:32.827 "state": "enabled", 00:11:32.827 "thread": "nvmf_tgt_poll_group_000", 00:11:32.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:11:32.827 "listen_address": { 00:11:32.827 "trtype": "TCP", 00:11:32.827 "adrfam": "IPv4", 00:11:32.827 "traddr": "10.0.0.3", 00:11:32.827 "trsvcid": "4420" 00:11:32.827 }, 00:11:32.827 "peer_address": { 00:11:32.827 "trtype": "TCP", 00:11:32.827 "adrfam": "IPv4", 00:11:32.827 "traddr": "10.0.0.1", 00:11:32.827 "trsvcid": "39878" 00:11:32.827 }, 00:11:32.827 "auth": { 00:11:32.827 "state": "completed", 00:11:32.827 "digest": "sha256", 00:11:32.827 "dhgroup": "ffdhe4096" 00:11:32.827 } 00:11:32.827 } 00:11:32.827 ]' 00:11:32.827 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:11:32.827 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:32.827 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:32.827 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:32.827 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:32.827 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.827 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.827 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.086 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:11:33.086 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:11:34.023 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.023 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:11:34.023 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.023 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.023 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.023 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:34.023 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:34.023 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:34.281 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:11:34.281 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:34.281 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:34.281 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:34.281 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:34.281 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.281 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.281 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.281 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.281 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.281 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.281 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.281 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.540 00:11:34.540 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:34.540 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:34.540 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.798 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.799 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.799 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.799 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.799 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.799 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:34.799 { 00:11:34.799 "cntlid": 27, 00:11:34.799 "qid": 0, 00:11:34.799 "state": "enabled", 00:11:34.799 "thread": "nvmf_tgt_poll_group_000", 00:11:34.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:11:34.799 "listen_address": { 00:11:34.799 "trtype": "TCP", 00:11:34.799 "adrfam": "IPv4", 00:11:34.799 "traddr": "10.0.0.3", 00:11:34.799 "trsvcid": "4420" 00:11:34.799 }, 00:11:34.799 "peer_address": { 00:11:34.799 "trtype": "TCP", 00:11:34.799 "adrfam": "IPv4", 00:11:34.799 "traddr": "10.0.0.1", 00:11:34.799 "trsvcid": "39912" 00:11:34.799 }, 00:11:34.799 "auth": { 00:11:34.799 "state": "completed", 
00:11:34.799 "digest": "sha256", 00:11:34.799 "dhgroup": "ffdhe4096" 00:11:34.799 } 00:11:34.799 } 00:11:34.799 ]' 00:11:34.799 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:34.799 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:34.799 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:34.799 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:34.799 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.057 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.057 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.057 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.316 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:11:35.316 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:11:35.882 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.882 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:11:35.882 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.882 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.882 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.882 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:35.882 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:35.882 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:36.449 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:11:36.449 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:36.449 10:35:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:36.449 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:36.449 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:36.449 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.449 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.449 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.449 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.449 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.449 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.449 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.449 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.707 00:11:36.707 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:36.707 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:36.707 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.273 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.273 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.273 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.273 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.273 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.273 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:37.273 { 00:11:37.273 "cntlid": 29, 00:11:37.273 "qid": 0, 00:11:37.273 "state": "enabled", 00:11:37.273 "thread": "nvmf_tgt_poll_group_000", 00:11:37.273 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:11:37.273 "listen_address": { 00:11:37.273 "trtype": "TCP", 00:11:37.273 "adrfam": "IPv4", 00:11:37.273 "traddr": "10.0.0.3", 00:11:37.273 "trsvcid": "4420" 00:11:37.273 }, 00:11:37.273 "peer_address": { 00:11:37.273 "trtype": "TCP", 00:11:37.273 "adrfam": 
"IPv4", 00:11:37.273 "traddr": "10.0.0.1", 00:11:37.273 "trsvcid": "39956" 00:11:37.273 }, 00:11:37.273 "auth": { 00:11:37.273 "state": "completed", 00:11:37.273 "digest": "sha256", 00:11:37.273 "dhgroup": "ffdhe4096" 00:11:37.273 } 00:11:37.273 } 00:11:37.273 ]' 00:11:37.274 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:37.274 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:37.274 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:37.274 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:37.274 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:37.274 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.274 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.274 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.532 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:11:37.532 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:11:38.467 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.467 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:11:38.467 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.467 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.467 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.467 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:38.467 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:38.467 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:38.727 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:11:38.727 10:35:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:38.727 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:38.727 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:38.727 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:38.727 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.727 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key3 00:11:38.727 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.727 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.727 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.727 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:38.727 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:38.727 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:38.986 00:11:38.986 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:38.986 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.986 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.313 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.313 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.313 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.313 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.313 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.313 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:39.313 { 00:11:39.313 "cntlid": 31, 00:11:39.313 "qid": 0, 00:11:39.313 "state": "enabled", 00:11:39.313 "thread": "nvmf_tgt_poll_group_000", 00:11:39.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:11:39.313 "listen_address": { 00:11:39.313 "trtype": "TCP", 00:11:39.313 "adrfam": "IPv4", 00:11:39.313 "traddr": "10.0.0.3", 00:11:39.313 "trsvcid": "4420" 00:11:39.313 }, 00:11:39.313 "peer_address": { 00:11:39.313 "trtype": "TCP", 
00:11:39.313 "adrfam": "IPv4", 00:11:39.313 "traddr": "10.0.0.1", 00:11:39.313 "trsvcid": "39992" 00:11:39.313 }, 00:11:39.313 "auth": { 00:11:39.313 "state": "completed", 00:11:39.313 "digest": "sha256", 00:11:39.313 "dhgroup": "ffdhe4096" 00:11:39.313 } 00:11:39.313 } 00:11:39.313 ]' 00:11:39.313 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:39.313 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:39.313 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:39.313 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:39.313 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:39.572 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.572 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.572 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.832 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:11:39.832 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:11:40.399 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.399 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:11:40.399 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.399 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.399 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.399 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:40.399 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:40.399 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:40.399 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:40.659 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:11:40.659 
10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:40.659 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:40.659 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:40.659 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:40.659 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.659 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.659 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.659 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.659 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.659 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.659 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.659 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.227 00:11:41.227 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:41.227 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:41.227 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.486 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.486 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.486 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.486 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.486 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.486 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:41.486 { 00:11:41.486 "cntlid": 33, 00:11:41.486 "qid": 0, 00:11:41.486 "state": "enabled", 00:11:41.486 "thread": "nvmf_tgt_poll_group_000", 00:11:41.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:11:41.486 "listen_address": { 00:11:41.486 "trtype": "TCP", 00:11:41.486 "adrfam": "IPv4", 00:11:41.486 "traddr": 
"10.0.0.3", 00:11:41.486 "trsvcid": "4420" 00:11:41.486 }, 00:11:41.486 "peer_address": { 00:11:41.486 "trtype": "TCP", 00:11:41.486 "adrfam": "IPv4", 00:11:41.486 "traddr": "10.0.0.1", 00:11:41.486 "trsvcid": "40016" 00:11:41.486 }, 00:11:41.486 "auth": { 00:11:41.486 "state": "completed", 00:11:41.486 "digest": "sha256", 00:11:41.486 "dhgroup": "ffdhe6144" 00:11:41.486 } 00:11:41.486 } 00:11:41.486 ]' 00:11:41.486 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:41.486 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:41.486 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:41.744 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:41.744 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:41.744 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.744 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.744 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.003 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:11:42.003 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:11:42.570 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.570 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:11:42.570 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.570 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.570 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.570 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:42.570 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:42.570 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:42.829 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:11:42.829 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:42.829 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:42.829 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:42.829 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:42.829 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.829 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.829 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.829 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.829 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.829 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.829 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.829 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.406 00:11:43.406 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:43.406 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.406 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:43.709 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.709 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.709 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.709 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.709 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.709 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:43.709 { 00:11:43.709 "cntlid": 35, 00:11:43.709 "qid": 0, 00:11:43.709 "state": "enabled", 00:11:43.709 "thread": "nvmf_tgt_poll_group_000", 
00:11:43.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:11:43.709 "listen_address": { 00:11:43.709 "trtype": "TCP", 00:11:43.709 "adrfam": "IPv4", 00:11:43.709 "traddr": "10.0.0.3", 00:11:43.709 "trsvcid": "4420" 00:11:43.709 }, 00:11:43.709 "peer_address": { 00:11:43.709 "trtype": "TCP", 00:11:43.709 "adrfam": "IPv4", 00:11:43.709 "traddr": "10.0.0.1", 00:11:43.709 "trsvcid": "35352" 00:11:43.709 }, 00:11:43.709 "auth": { 00:11:43.709 "state": "completed", 00:11:43.709 "digest": "sha256", 00:11:43.709 "dhgroup": "ffdhe6144" 00:11:43.709 } 00:11:43.709 } 00:11:43.709 ]' 00:11:43.709 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:43.967 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:43.967 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:43.967 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:43.967 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:43.967 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.967 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.967 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.226 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:11:44.226 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:11:45.164 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.164 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:11:45.164 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.164 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.164 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.164 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.164 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:45.164 10:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:45.164 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:11:45.164 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.164 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:45.164 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:45.164 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:45.164 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.164 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.164 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.164 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.164 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.164 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.164 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.164 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.734 00:11:45.734 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:45.734 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.734 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.301 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.301 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.301 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.301 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.301 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.301 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.301 { 
00:11:46.301 "cntlid": 37, 00:11:46.301 "qid": 0, 00:11:46.301 "state": "enabled", 00:11:46.301 "thread": "nvmf_tgt_poll_group_000", 00:11:46.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:11:46.301 "listen_address": { 00:11:46.301 "trtype": "TCP", 00:11:46.301 "adrfam": "IPv4", 00:11:46.301 "traddr": "10.0.0.3", 00:11:46.301 "trsvcid": "4420" 00:11:46.301 }, 00:11:46.301 "peer_address": { 00:11:46.301 "trtype": "TCP", 00:11:46.301 "adrfam": "IPv4", 00:11:46.301 "traddr": "10.0.0.1", 00:11:46.301 "trsvcid": "35368" 00:11:46.301 }, 00:11:46.301 "auth": { 00:11:46.301 "state": "completed", 00:11:46.301 "digest": "sha256", 00:11:46.301 "dhgroup": "ffdhe6144" 00:11:46.301 } 00:11:46.301 } 00:11:46.301 ]' 00:11:46.301 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.301 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:46.301 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.301 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:46.301 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.301 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.301 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.301 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.558 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:11:46.559 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:11:47.494 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.494 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:11:47.494 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.494 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.494 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.494 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.494 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:47.494 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:47.753 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:11:47.753 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.753 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:47.753 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:47.753 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:47.753 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.753 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key3 00:11:47.753 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.753 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.753 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.753 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:47.753 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:47.753 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:48.321 00:11:48.321 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.321 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.321 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.581 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.581 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.581 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.581 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.581 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.581 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:11:48.581 { 00:11:48.581 "cntlid": 39, 00:11:48.581 "qid": 0, 00:11:48.581 "state": "enabled", 00:11:48.581 "thread": "nvmf_tgt_poll_group_000", 00:11:48.581 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:11:48.581 "listen_address": { 00:11:48.581 "trtype": "TCP", 00:11:48.581 "adrfam": "IPv4", 00:11:48.581 "traddr": "10.0.0.3", 00:11:48.581 "trsvcid": "4420" 00:11:48.581 }, 00:11:48.581 "peer_address": { 00:11:48.581 "trtype": "TCP", 00:11:48.581 "adrfam": "IPv4", 00:11:48.581 "traddr": "10.0.0.1", 00:11:48.581 "trsvcid": "35390" 00:11:48.581 }, 00:11:48.581 "auth": { 00:11:48.581 "state": "completed", 00:11:48.581 "digest": "sha256", 00:11:48.581 "dhgroup": "ffdhe6144" 00:11:48.581 } 00:11:48.581 } 00:11:48.581 ]' 00:11:48.581 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.581 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:48.581 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.581 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:48.581 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:48.581 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.581 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.581 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.148 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:11:49.148 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:11:49.715 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.715 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:11:49.715 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.715 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.715 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.715 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:49.715 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:49.715 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:49.715 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:49.973 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:11:49.973 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:49.973 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:49.973 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:49.973 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:49.973 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.973 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.973 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.973 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.973 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.973 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.973 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.973 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.538 00:11:50.538 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.538 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.538 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.105 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.105 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.105 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.105 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.105 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:51.105 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:51.105 { 00:11:51.105 "cntlid": 41, 00:11:51.105 "qid": 0, 00:11:51.105 "state": "enabled", 00:11:51.105 "thread": "nvmf_tgt_poll_group_000", 00:11:51.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:11:51.105 "listen_address": { 00:11:51.105 "trtype": "TCP", 00:11:51.105 "adrfam": "IPv4", 00:11:51.105 "traddr": "10.0.0.3", 00:11:51.105 "trsvcid": "4420" 00:11:51.105 }, 00:11:51.105 "peer_address": { 00:11:51.105 "trtype": "TCP", 00:11:51.105 "adrfam": "IPv4", 00:11:51.105 "traddr": "10.0.0.1", 00:11:51.105 "trsvcid": "35404" 00:11:51.105 }, 00:11:51.105 "auth": { 00:11:51.105 "state": "completed", 00:11:51.105 "digest": "sha256", 00:11:51.105 "dhgroup": "ffdhe8192" 00:11:51.105 } 00:11:51.105 } 00:11:51.105 ]' 00:11:51.105 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:51.105 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:51.105 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:51.105 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:51.105 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:51.105 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.105 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.105 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.363 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:11:51.363 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:11:52.297 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.297 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:11:52.297 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.298 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.298 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:52.298 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:11:52.298 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:11:52.298 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:11:52.555 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1
00:11:52.555 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:11:52.555 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:11:52.555 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:11:52.555 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:11:52.555 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:11:52.555 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:11:52.555 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.555 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:52.555 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:52.555 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:11:52.555 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:11:52.555 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:11:53.120
00:11:53.121 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:11:53.121 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:11:53.121 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:11:53.378 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:11:53.378 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:11:53.378 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:53.378 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:53.378 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:53.378 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:11:53.378 {
00:11:53.378 "cntlid": 43,
00:11:53.378 "qid": 0,
00:11:53.378 "state": "enabled",
00:11:53.378 "thread": "nvmf_tgt_poll_group_000",
00:11:53.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f",
00:11:53.378 "listen_address": {
00:11:53.378 "trtype": "TCP",
00:11:53.378 "adrfam": "IPv4",
00:11:53.378 "traddr": "10.0.0.3",
00:11:53.378 "trsvcid": "4420"
00:11:53.378 },
00:11:53.378 "peer_address": {
00:11:53.378 "trtype": "TCP",
00:11:53.378 "adrfam": "IPv4",
00:11:53.378 "traddr": "10.0.0.1",
00:11:53.378 "trsvcid": "48512"
00:11:53.378 },
00:11:53.379 "auth": {
00:11:53.379 "state": "completed",
00:11:53.379 "digest": "sha256",
00:11:53.379 "dhgroup": "ffdhe8192"
00:11:53.379 }
00:11:53.379 }
00:11:53.379 ]'
10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:11:53.637 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:11:53.637 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:11:53.637 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:11:53.637 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:11:53.637 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:11:53.637 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:11:53.637 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:11:53.895 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==:
00:11:53.895 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==:
00:11:54.830 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:54.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:54.830 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f
00:11:54.830 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.830 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:54.830 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.830 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:11:54.830 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:11:54.830 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:11:54.830 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2
00:11:54.830 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:11:54.830 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:11:54.830 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:11:54.830 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:11:54.830 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:11:54.830 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:11:54.830 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.830 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:54.830 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.830 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:11:54.830 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:11:54.830 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:11:55.764
00:11:55.764 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:11:55.765 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:11:55.765 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:11:56.023 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:11:56.023 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:11:56.023 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:56.023 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:56.023 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:56.023 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:11:56.023 {
00:11:56.023 "cntlid": 45,
00:11:56.023 "qid": 0,
00:11:56.023 "state": "enabled",
00:11:56.023 "thread": "nvmf_tgt_poll_group_000",
00:11:56.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f",
00:11:56.023 "listen_address": {
00:11:56.023 "trtype": "TCP",
00:11:56.023 "adrfam": "IPv4",
00:11:56.023 "traddr": "10.0.0.3",
00:11:56.023 "trsvcid": "4420"
00:11:56.023 },
00:11:56.023 "peer_address": {
00:11:56.023 "trtype": "TCP",
00:11:56.023 "adrfam": "IPv4",
00:11:56.023 "traddr": "10.0.0.1",
00:11:56.023 "trsvcid": "48548"
00:11:56.023 },
00:11:56.023 "auth": {
00:11:56.023 "state": "completed",
00:11:56.023 "digest": "sha256",
00:11:56.023 "dhgroup": "ffdhe8192"
00:11:56.023 }
00:11:56.023 }
00:11:56.023 ]'
10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:11:56.023 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:11:56.023 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:11:56.023 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:11:56.023 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:11:56.023 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:11:56.023 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:11:56.023 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:11:56.589 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir:
00:11:56.589 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir:
00:11:57.154 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:57.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:57.154 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f
00:11:57.154 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.154 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:57.154 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.154 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:11:57.154 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:11:57.154 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:11:57.413 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3
00:11:57.413 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:11:57.413 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:11:57.413 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:11:57.413 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:11:57.413 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:11:57.413 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key3
00:11:57.413 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.413 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:57.413 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.413 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:11:57.413 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:11:57.413 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:11:58.347
00:11:58.347 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:11:58.347 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:11:58.347 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:11:58.605 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:11:58.605 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:11:58.605 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:58.605 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:58.605 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:58.605 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:11:58.605 {
00:11:58.605 "cntlid": 47,
00:11:58.605 "qid": 0,
00:11:58.605 "state": "enabled",
00:11:58.605 "thread": "nvmf_tgt_poll_group_000",
00:11:58.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f",
00:11:58.605 "listen_address": {
00:11:58.605 "trtype": "TCP",
00:11:58.605 "adrfam": "IPv4",
00:11:58.605 "traddr": "10.0.0.3",
00:11:58.605 "trsvcid": "4420"
00:11:58.605 },
00:11:58.605 "peer_address": {
00:11:58.605 "trtype": "TCP",
00:11:58.605 "adrfam": "IPv4",
00:11:58.605 "traddr": "10.0.0.1",
00:11:58.605 "trsvcid": "48572"
00:11:58.605 },
00:11:58.605 "auth": {
00:11:58.605 "state": "completed",
00:11:58.605 "digest": "sha256",
00:11:58.605 "dhgroup": "ffdhe8192"
00:11:58.605 }
00:11:58.605 }
00:11:58.605 ]'
10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:11:58.605 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:11:58.605 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:11:58.605 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:11:58.605 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:11:58.864 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:11:58.864 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:11:58.864 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:11:59.130 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=:
00:11:59.130 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=:
00:11:59.708 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:59.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:59.708 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f
00:11:59.708 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:59.708 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:59.708 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:59.708 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:11:59.708 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:11:59.708 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:11:59.708 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:11:59.708 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:12:00.275 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0
00:12:00.275 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:00.275 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:12:00.275 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:12:00.275 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:12:00.275 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:00.275 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:00.275 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.275 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:00.275 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.275 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:00.275 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:00.275 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:00.534
00:12:00.534 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:00.534 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:00.534 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:00.792 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:00.792 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:00.792 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.792 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:00.792 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.792 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:00.792 {
00:12:00.792 "cntlid": 49,
00:12:00.792 "qid": 0,
00:12:00.792 "state": "enabled",
00:12:00.792 "thread": "nvmf_tgt_poll_group_000",
00:12:00.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f",
00:12:00.792 "listen_address": {
00:12:00.792 "trtype": "TCP",
00:12:00.792 "adrfam": "IPv4",
00:12:00.792 "traddr": "10.0.0.3",
00:12:00.792 "trsvcid": "4420"
00:12:00.792 },
00:12:00.792 "peer_address": {
00:12:00.792 "trtype": "TCP",
00:12:00.792 "adrfam": "IPv4",
00:12:00.792 "traddr": "10.0.0.1",
00:12:00.792 "trsvcid": "48604"
00:12:00.792 },
00:12:00.792 "auth": {
00:12:00.792 "state": "completed",
00:12:00.792 "digest": "sha384",
00:12:00.792 "dhgroup": "null"
00:12:00.792 }
00:12:00.792 }
00:12:00.792 ]'
10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:01.050 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:12:01.050 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:01.050 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:12:01.050 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:01.050 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:01.050 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:01.050 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:01.308 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=:
00:12:01.308 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=:
00:12:02.241 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:02.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:02.241 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f
00:12:02.241 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.241 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:02.241 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.241 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:02.241 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:12:02.241 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:12:02.498 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:12:02.498 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:02.498 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:12:02.498 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:12:02.498 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:12:02.498 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:02.498 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:02.498 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.498 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:02.498 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.498 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:02.498 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:02.498 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:02.756
00:12:02.756 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:02.756 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:02.756 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:03.014 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:03.014 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:03.014 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:03.014 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:03.272 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:03.272 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:03.272 {
00:12:03.272 "cntlid": 51,
00:12:03.272 "qid": 0,
00:12:03.272 "state": "enabled",
00:12:03.272 "thread": "nvmf_tgt_poll_group_000",
00:12:03.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f",
00:12:03.272 "listen_address": {
00:12:03.272 "trtype": "TCP",
00:12:03.272 "adrfam": "IPv4",
00:12:03.272 "traddr": "10.0.0.3",
00:12:03.272 "trsvcid": "4420"
00:12:03.272 },
00:12:03.272 "peer_address": {
00:12:03.272 "trtype": "TCP",
00:12:03.272 "adrfam": "IPv4",
00:12:03.272 "traddr": "10.0.0.1",
00:12:03.272 "trsvcid": "51624"
00:12:03.272 },
00:12:03.272 "auth": {
00:12:03.272 "state": "completed",
00:12:03.272 "digest": "sha384",
00:12:03.272 "dhgroup": "null"
00:12:03.272 }
00:12:03.272 }
00:12:03.272 ]'
10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:03.272 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:12:03.272 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:03.272 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:12:03.272 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:03.272 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:03.272 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:03.272 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:03.531 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==:
00:12:03.531 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==:
00:12:04.466 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:04.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:04.466 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f
00:12:04.466 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.466 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:04.466 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.466 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:04.466 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:12:04.467 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:12:04.725 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:12:04.725 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:04.725 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:12:04.725 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:12:04.725 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:12:04.725 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:04.725 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:04.725 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.725 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:04.725 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.725 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:04.725 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:04.725 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:04.983
00:12:04.983 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:04.983 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:04.983 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:05.240 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:05.241 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:05.241 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:05.241 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:05.241 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:05.241 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:05.241 {
00:12:05.241 "cntlid": 53,
00:12:05.241 "qid": 0,
00:12:05.241 "state": "enabled",
00:12:05.241 "thread": "nvmf_tgt_poll_group_000",
00:12:05.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f",
00:12:05.241 "listen_address": {
00:12:05.241 "trtype": "TCP",
00:12:05.241 "adrfam": "IPv4",
00:12:05.241 "traddr": "10.0.0.3",
00:12:05.241 "trsvcid": "4420"
00:12:05.241 },
00:12:05.241 "peer_address": {
00:12:05.241 "trtype": "TCP",
00:12:05.241 "adrfam": "IPv4",
00:12:05.241 "traddr": "10.0.0.1",
00:12:05.241 "trsvcid": "51666"
00:12:05.241 },
00:12:05.241 "auth": {
00:12:05.241 "state": "completed",
00:12:05.241 "digest": "sha384",
00:12:05.241 "dhgroup": "null"
00:12:05.241 }
00:12:05.241 }
00:12:05.241 ]'
10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:05.498 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:12:05.498 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:05.498 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:12:05.498 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:05.498 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:05.498 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:05.498 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:05.756 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir:
00:12:05.756 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir:
00:12:06.718 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:06.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:06.718 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f
00:12:06.718 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.718 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:06.718 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.718 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:06.718 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:12:06.719 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:12:06.719 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:12:06.719 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:06.719 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:12:06.719 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:12:06.719 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:12:06.719 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:06.719 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key3
00:12:06.719 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.719 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:06.719 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.719 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:12:06.719 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:12:06.719 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:12:07.285
00:12:07.285 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:07.285 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:07.285 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:07.543 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:07.543 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:07.543 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.543 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:07.543 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.543 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:07.543 {
00:12:07.543 "cntlid": 55,
00:12:07.543 "qid": 0,
00:12:07.543 "state": "enabled",
00:12:07.543 "thread": "nvmf_tgt_poll_group_000",
00:12:07.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f",
00:12:07.543 "listen_address": {
00:12:07.543 "trtype": "TCP",
00:12:07.543 "adrfam": "IPv4",
00:12:07.543 "traddr": "10.0.0.3",
00:12:07.543 "trsvcid": "4420"
00:12:07.543 },
00:12:07.543 "peer_address": {
00:12:07.543 "trtype": "TCP",
00:12:07.543 "adrfam": "IPv4",
00:12:07.543 "traddr": "10.0.0.1",
00:12:07.543 "trsvcid": "51678"
00:12:07.543 },
00:12:07.543 "auth": {
00:12:07.543 "state": "completed",
00:12:07.543 "digest": "sha384",
00:12:07.543 "dhgroup": "null"
00:12:07.543 }
00:12:07.543 }
00:12:07.543 ]'
10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:07.543 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:12:07.543 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:07.543 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:12:07.543 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:07.543 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:07.543 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:07.543 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:07.801 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=:
00:12:07.801 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=:
00:12:08.737 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:08.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:08.737 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f
00:12:08.737 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:08.737 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:08.737 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:08.737 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:12:08.737 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:08.737 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:12:08.737 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:12:08.993 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:12:08.994 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:08.994 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:12:08.994 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:12:08.994 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:12:08.994 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:08.994 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:08.994 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:08.994 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:08.994 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:08.994 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:08.994 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:08.994 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:09.296
00:12:09.296 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:09.296 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:09.296 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:09.554 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:09.554 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:09.554 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:09.554 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:09.554 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:09.554 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:09.554 {
00:12:09.554 "cntlid": 57,
00:12:09.554 "qid": 0,
00:12:09.554 "state": "enabled",
00:12:09.554 "thread": "nvmf_tgt_poll_group_000",
00:12:09.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f",
00:12:09.554 "listen_address": {
00:12:09.554 "trtype": "TCP",
00:12:09.554 "adrfam": "IPv4",
00:12:09.554 "traddr": "10.0.0.3",
00:12:09.554 "trsvcid": "4420"
00:12:09.554 },
00:12:09.554 "peer_address": {
00:12:09.554 "trtype": "TCP",
00:12:09.554 "adrfam": "IPv4",
00:12:09.554 "traddr": "10.0.0.1",
00:12:09.554 "trsvcid": "51710"
00:12:09.554 },
00:12:09.554 "auth": {
00:12:09.554 "state": "completed",
00:12:09.554 "digest": "sha384",
00:12:09.554 "dhgroup": "ffdhe2048"
00:12:09.554 }
00:12:09.554 }
00:12:09.554 ]'
10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:09.813 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:12:09.813 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:09.813 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:12:09.813 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:09.813 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:09.813 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:09.813 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:10.072 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=:
00:12:10.072 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=:
00:12:11.067 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:11.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:11.067 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f
00:12:11.067 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:11.067 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:11.067 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:11.067 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:11.067 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:12:11.067 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:12:11.325 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:12:11.325 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:11.325 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:12:11.325 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:12:11.325 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:12:11.325 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:11.325 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:11.325 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:11.325 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:11.325 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:11.325 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:11.325 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:11.325 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:11.584
00:12:11.584 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:11.584 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:11.584 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:11.843 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:11.843 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:11.843 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:11.843 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:11.843 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:11.843 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:11.843 {
00:12:11.843 "cntlid": 59,
00:12:11.843 "qid": 0,
00:12:11.843 "state": "enabled",
00:12:11.843 "thread": "nvmf_tgt_poll_group_000",
00:12:11.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f",
00:12:11.843 "listen_address": {
00:12:11.843 "trtype": "TCP",
00:12:11.843 "adrfam": "IPv4",
00:12:11.843 "traddr": "10.0.0.3",
00:12:11.843 "trsvcid": "4420"
00:12:11.843 },
00:12:11.843 "peer_address": {
00:12:11.843 "trtype": "TCP",
00:12:11.843 "adrfam": "IPv4",
00:12:11.843 "traddr": "10.0.0.1",
00:12:11.843 "trsvcid": "41044"
00:12:11.843 },
00:12:11.843 "auth": {
00:12:11.843 "state": "completed",
00:12:11.843 "digest": "sha384",
00:12:11.843 "dhgroup": "ffdhe2048"
00:12:11.843 }
00:12:11.843 }
00:12:11.843 ]'
10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:12.101 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:12:12.101 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:12.101 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:12:12.101 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:12.101 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:12.101 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:12.101 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:12.358 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==:
00:12:12.358 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==:
00:12:13.335 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:13.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:13.335 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f
00:12:13.335 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.335 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:13.335 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.335 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:13.335 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:12:13.335 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:12:13.335 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2
00:12:13.335 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:13.335 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:12:13.335 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:12:13.335 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:12:13.335 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:13.335 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:13.335 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.335 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:13.335 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.335 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:13.335 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:13.335 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:13.902
00:12:13.902 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:13.902 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:13.902 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:13.902 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:13.902 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:13.902 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.902 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:13.902 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.902 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:13.902 {
00:12:13.902 "cntlid": 61,
00:12:13.902 "qid": 0,
00:12:13.902 "state": "enabled",
00:12:13.902 "thread": "nvmf_tgt_poll_group_000",
00:12:13.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f",
00:12:13.902 "listen_address": {
00:12:13.902 "trtype": "TCP",
00:12:13.902 "adrfam": "IPv4",
00:12:13.902 "traddr": "10.0.0.3",
00:12:13.902 "trsvcid": "4420"
00:12:13.902 },
00:12:13.902 "peer_address": {
00:12:13.902 "trtype": "TCP",
00:12:13.902 "adrfam": "IPv4",
00:12:13.902 "traddr": "10.0.0.1",
00:12:13.902 "trsvcid": "41080"
00:12:13.902 },
00:12:13.902 "auth": {
00:12:13.902 "state": "completed",
00:12:13.902 "digest": "sha384",
00:12:13.902 "dhgroup": "ffdhe2048"
00:12:13.902 }
00:12:13.902 }
00:12:13.902 ]'
10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:14.161 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:12:14.161 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:14.161 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:12:14.161 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:14.161 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:14.161 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:14.161 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:14.420 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir:
00:12:14.420 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir:
00:12:14.985 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:15.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:15.243 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f
00:12:15.243 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.243 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:15.243 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.243 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:15.243 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:12:15.243 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:12:15.501 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3
00:12:15.501 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:15.501 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:12:15.501 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:12:15.501 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:12:15.501 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:15.501 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key3
00:12:15.501 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.501 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:15.501 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.501 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:12:15.501 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:12:15.501 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:15.760 00:12:15.760 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:15.760 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.760 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.326 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.326 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.326 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.326 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.326 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.326 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.326 { 00:12:16.326 "cntlid": 63, 00:12:16.326 "qid": 0, 00:12:16.326 "state": "enabled", 00:12:16.326 "thread": "nvmf_tgt_poll_group_000", 00:12:16.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:12:16.326 "listen_address": { 00:12:16.326 "trtype": "TCP", 00:12:16.326 "adrfam": "IPv4", 00:12:16.326 "traddr": "10.0.0.3", 00:12:16.326 "trsvcid": "4420" 00:12:16.326 }, 00:12:16.326 "peer_address": { 00:12:16.326 "trtype": "TCP", 00:12:16.326 "adrfam": "IPv4", 00:12:16.326 "traddr": "10.0.0.1", 00:12:16.326 "trsvcid": "41108" 00:12:16.326 }, 00:12:16.326 "auth": { 00:12:16.326 "state": "completed", 00:12:16.326 "digest": "sha384", 00:12:16.326 "dhgroup": "ffdhe2048" 00:12:16.326 } 00:12:16.326 } 00:12:16.326 ]' 00:12:16.326 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.326 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:16.326 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.326 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:16.326 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.326 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.326 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.326 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.584 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:12:16.584 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:12:17.527 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.527 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:12:17.527 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.528 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.528 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.528 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:17.528 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.528 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:17.528 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:17.528 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:12:17.528 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.528 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:17.528 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:17.528 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:17.528 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.528 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.528 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.528 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.529 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.529 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.529 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:17.529 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.096 00:12:18.096 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.096 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.096 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.355 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.355 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.355 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.355 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.355 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.355 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.355 { 00:12:18.355 "cntlid": 65, 00:12:18.355 "qid": 0, 00:12:18.355 "state": "enabled", 00:12:18.355 "thread": "nvmf_tgt_poll_group_000", 00:12:18.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:12:18.355 "listen_address": { 00:12:18.355 "trtype": "TCP", 00:12:18.355 "adrfam": "IPv4", 00:12:18.355 "traddr": "10.0.0.3", 00:12:18.355 "trsvcid": "4420" 00:12:18.355 }, 00:12:18.355 "peer_address": { 00:12:18.355 "trtype": "TCP", 00:12:18.355 "adrfam": "IPv4", 00:12:18.355 "traddr": "10.0.0.1", 00:12:18.355 "trsvcid": "41136" 00:12:18.355 }, 00:12:18.355 "auth": { 00:12:18.355 "state": "completed", 00:12:18.355 "digest": "sha384", 00:12:18.355 "dhgroup": "ffdhe3072" 00:12:18.355 } 00:12:18.355 } 00:12:18.355 ]' 00:12:18.355 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.355 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:18.355 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.355 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:18.355 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.613 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.613 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.613 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.872 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:12:18.872 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:12:19.440 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.440 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:12:19.440 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.440 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.440 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.440 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:19.440 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:19.440 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:20.007 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:12:20.007 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.007 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:20.007 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:20.007 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:20.007 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.007 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.007 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.007 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.007 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.007 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.007 10:35:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.007 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.266 00:12:20.266 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:20.266 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.266 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:20.524 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.524 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.524 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.524 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.524 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.524 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.524 { 00:12:20.524 "cntlid": 67, 00:12:20.524 "qid": 0, 00:12:20.524 "state": "enabled", 00:12:20.524 "thread": "nvmf_tgt_poll_group_000", 00:12:20.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:12:20.524 "listen_address": { 00:12:20.524 "trtype": "TCP", 00:12:20.524 "adrfam": "IPv4", 00:12:20.524 "traddr": "10.0.0.3", 00:12:20.524 "trsvcid": "4420" 00:12:20.524 }, 00:12:20.524 "peer_address": { 00:12:20.524 "trtype": "TCP", 00:12:20.524 "adrfam": "IPv4", 00:12:20.524 "traddr": "10.0.0.1", 00:12:20.524 "trsvcid": "41168" 00:12:20.524 }, 00:12:20.524 "auth": { 00:12:20.524 "state": "completed", 00:12:20.524 "digest": "sha384", 00:12:20.524 "dhgroup": "ffdhe3072" 00:12:20.524 } 00:12:20.524 } 00:12:20.524 ]' 00:12:20.524 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.524 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:20.524 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:20.524 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:20.524 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.783 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.783 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.783 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.041 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:12:21.041 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:12:21.607 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.607 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:12:21.607 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.607 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.607 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.607 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:21.607 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:21.607 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:21.865 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:12:21.865 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:21.865 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:21.865 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:21.865 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:21.865 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.865 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.865 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.865 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.865 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.865 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.865 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.865 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.432 00:12:22.432 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:22.432 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:22.432 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.729 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.729 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.729 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.729 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.729 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.729 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:22.729 { 00:12:22.729 "cntlid": 69, 00:12:22.729 "qid": 0, 00:12:22.729 "state": "enabled", 00:12:22.729 "thread": "nvmf_tgt_poll_group_000", 00:12:22.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:12:22.729 "listen_address": { 00:12:22.729 "trtype": "TCP", 00:12:22.729 "adrfam": "IPv4", 00:12:22.729 "traddr": "10.0.0.3", 00:12:22.729 "trsvcid": "4420" 00:12:22.729 }, 00:12:22.729 "peer_address": { 00:12:22.729 "trtype": "TCP", 00:12:22.729 "adrfam": "IPv4", 00:12:22.729 "traddr": "10.0.0.1", 00:12:22.729 "trsvcid": "57966" 00:12:22.729 }, 00:12:22.729 "auth": { 00:12:22.729 "state": "completed", 00:12:22.729 "digest": "sha384", 00:12:22.729 "dhgroup": "ffdhe3072" 00:12:22.729 } 00:12:22.729 } 00:12:22.729 ]' 00:12:22.729 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:22.729 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:22.729 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:22.729 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:22.729 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:22.729 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.729 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:22.729 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.988 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:12:22.988 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:12:23.924 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.924 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:12:23.924 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.924 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.924 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.924 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:23.924 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:23.924 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:24.183 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:12:24.183 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:24.183 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:24.183 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:24.183 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:24.183 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.183 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key3 00:12:24.183 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.183 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.183 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.183 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:24.183 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:24.183 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:24.750 00:12:24.750 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:24.750 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:24.750 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.009 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.009 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.009 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.009 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.009 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.009 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:25.009 { 00:12:25.009 "cntlid": 71, 00:12:25.009 "qid": 0, 00:12:25.009 "state": "enabled", 00:12:25.009 "thread": "nvmf_tgt_poll_group_000", 00:12:25.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:12:25.009 "listen_address": { 00:12:25.009 "trtype": "TCP", 00:12:25.009 "adrfam": "IPv4", 00:12:25.009 "traddr": "10.0.0.3", 00:12:25.009 "trsvcid": "4420" 00:12:25.009 }, 00:12:25.009 "peer_address": { 00:12:25.009 "trtype": "TCP", 00:12:25.009 "adrfam": "IPv4", 00:12:25.009 "traddr": "10.0.0.1", 00:12:25.009 "trsvcid": "57994" 00:12:25.009 }, 00:12:25.009 "auth": { 00:12:25.009 "state": "completed", 00:12:25.009 "digest": "sha384", 00:12:25.009 "dhgroup": "ffdhe3072" 00:12:25.009 } 00:12:25.009 } 00:12:25.009 ]' 00:12:25.009 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.009 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:25.009 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.009 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:25.009 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:25.009 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.009 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.009 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.268 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:12:25.268 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:12:26.201 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.201 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:12:26.201 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.201 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.201 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.201 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:26.201 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:26.201 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:26.201 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:26.459 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:12:26.459 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:26.459 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:26.459 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:26.459 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:26.459 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.459 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.459 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.459 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.459 10:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.459 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.459 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.459 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.717 00:12:26.718 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:26.718 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:26.718 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.976 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.976 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.976 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.976 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.976 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.976 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.976 { 00:12:26.976 "cntlid": 73, 00:12:26.976 "qid": 0, 00:12:26.976 "state": "enabled", 00:12:26.976 "thread": "nvmf_tgt_poll_group_000", 00:12:26.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:12:26.976 "listen_address": { 00:12:26.976 "trtype": "TCP", 00:12:26.976 "adrfam": "IPv4", 00:12:26.976 "traddr": "10.0.0.3", 00:12:26.976 "trsvcid": "4420" 00:12:26.976 }, 00:12:26.976 "peer_address": { 00:12:26.976 "trtype": "TCP", 00:12:26.976 "adrfam": "IPv4", 00:12:26.976 "traddr": "10.0.0.1", 00:12:26.976 "trsvcid": "58012" 00:12:26.976 }, 00:12:26.976 "auth": { 00:12:26.976 "state": "completed", 00:12:26.976 "digest": "sha384", 00:12:26.976 "dhgroup": "ffdhe4096" 00:12:26.976 } 00:12:26.976 } 00:12:26.976 ]' 00:12:26.976 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:27.235 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:27.235 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:27.235 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:27.235 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:27.235 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.235 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.235 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.493 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:12:27.494 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:12:28.429 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.429 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:12:28.429 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.429 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.429 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.429 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.429 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:28.429 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:28.687 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:12:28.687 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.687 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:28.687 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:28.687 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:28.687 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.687 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.687 10:35:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.687 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.687 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.687 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.687 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.687 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.946 00:12:28.946 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.946 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.946 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:29.206 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.206 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.206 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.206 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.206 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.206 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:29.206 { 00:12:29.206 "cntlid": 75, 00:12:29.206 "qid": 0, 00:12:29.206 "state": "enabled", 00:12:29.206 "thread": "nvmf_tgt_poll_group_000", 00:12:29.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:12:29.206 "listen_address": { 00:12:29.206 "trtype": "TCP", 00:12:29.206 "adrfam": "IPv4", 00:12:29.206 "traddr": "10.0.0.3", 00:12:29.206 "trsvcid": "4420" 00:12:29.206 }, 00:12:29.206 "peer_address": { 00:12:29.206 "trtype": "TCP", 00:12:29.206 "adrfam": "IPv4", 00:12:29.206 "traddr": "10.0.0.1", 00:12:29.206 "trsvcid": "58042" 00:12:29.206 }, 00:12:29.206 "auth": { 00:12:29.206 "state": "completed", 00:12:29.206 "digest": "sha384", 00:12:29.206 "dhgroup": "ffdhe4096" 00:12:29.206 } 00:12:29.206 } 00:12:29.206 ]' 00:12:29.206 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.462 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:29.462 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:29.462 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:12:29.462 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.462 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.462 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.462 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.719 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:12:29.719 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:12:30.653 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.653 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:12:30.653 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.653 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.653 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.653 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.653 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:30.653 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:30.653 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:12:30.653 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.653 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:30.653 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:30.653 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:30.653 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.653 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.653 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.653 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.912 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.912 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.912 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.912 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.170 00:12:31.170 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:31.170 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:31.170 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.737 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.737 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.737 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.737 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.737 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.737 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:31.737 { 00:12:31.737 "cntlid": 77, 00:12:31.737 "qid": 0, 00:12:31.737 "state": "enabled", 00:12:31.737 "thread": "nvmf_tgt_poll_group_000", 00:12:31.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:12:31.737 "listen_address": { 00:12:31.737 "trtype": "TCP", 00:12:31.737 "adrfam": "IPv4", 00:12:31.737 "traddr": "10.0.0.3", 00:12:31.737 "trsvcid": "4420" 00:12:31.737 }, 00:12:31.737 "peer_address": { 00:12:31.737 "trtype": "TCP", 00:12:31.737 "adrfam": "IPv4", 00:12:31.737 "traddr": "10.0.0.1", 00:12:31.737 "trsvcid": "58058" 00:12:31.737 }, 00:12:31.737 "auth": { 00:12:31.737 "state": "completed", 00:12:31.737 "digest": "sha384", 00:12:31.737 "dhgroup": "ffdhe4096" 00:12:31.737 } 00:12:31.737 } 00:12:31.737 ]' 00:12:31.737 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:31.737 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:31.737 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:12:31.737 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:31.737 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.737 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.737 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.737 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.997 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:12:31.997 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:12:32.935 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.935 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:12:32.935 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.935 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.935 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.935 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:32.935 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:32.935 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:33.193 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:12:33.193 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:33.193 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:33.193 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:33.193 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:33.193 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.193 10:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key3 00:12:33.193 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.193 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.193 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.193 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:33.194 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:33.194 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:33.759 00:12:33.759 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.759 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.759 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:34.018 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.018 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.018 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.018 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.018 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.018 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:34.018 { 00:12:34.018 "cntlid": 79, 00:12:34.018 "qid": 0, 00:12:34.018 "state": "enabled", 00:12:34.018 "thread": "nvmf_tgt_poll_group_000", 00:12:34.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:12:34.018 "listen_address": { 00:12:34.018 "trtype": "TCP", 00:12:34.018 "adrfam": "IPv4", 00:12:34.018 "traddr": "10.0.0.3", 00:12:34.018 "trsvcid": "4420" 00:12:34.019 }, 00:12:34.019 "peer_address": { 00:12:34.019 "trtype": "TCP", 00:12:34.019 "adrfam": "IPv4", 00:12:34.019 "traddr": "10.0.0.1", 00:12:34.019 "trsvcid": "33242" 00:12:34.019 }, 00:12:34.019 "auth": { 00:12:34.019 "state": "completed", 00:12:34.019 "digest": "sha384", 00:12:34.019 "dhgroup": "ffdhe4096" 00:12:34.019 } 00:12:34.019 } 00:12:34.019 ]' 00:12:34.019 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:34.019 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:34.019 10:35:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:34.019 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:34.019 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:34.362 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.362 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.362 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.700 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:12:34.700 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:12:35.267 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.267 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:12:35.267 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.267 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.267 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.267 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:35.267 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:35.267 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:35.267 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:35.836 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:12:35.836 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.836 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:35.836 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:35.836 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:35.836 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.836 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.836 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.836 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.836 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.836 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.836 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.836 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.095 00:12:36.352 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:36.352 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:36.352 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.611 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.611 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.611 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.611 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.611 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.611 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:36.611 { 00:12:36.611 "cntlid": 81, 00:12:36.611 "qid": 0, 00:12:36.611 "state": "enabled", 00:12:36.611 "thread": "nvmf_tgt_poll_group_000", 00:12:36.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:12:36.611 "listen_address": { 00:12:36.611 "trtype": "TCP", 00:12:36.611 "adrfam": "IPv4", 00:12:36.611 "traddr": "10.0.0.3", 00:12:36.611 "trsvcid": "4420" 00:12:36.611 }, 00:12:36.611 "peer_address": { 00:12:36.611 "trtype": "TCP", 00:12:36.611 "adrfam": "IPv4", 00:12:36.611 "traddr": "10.0.0.1", 00:12:36.611 "trsvcid": "33254" 00:12:36.611 }, 00:12:36.611 "auth": { 00:12:36.611 "state": "completed", 00:12:36.611 "digest": "sha384", 00:12:36.611 "dhgroup": "ffdhe6144" 00:12:36.611 } 00:12:36.611 } 00:12:36.611 ]' 00:12:36.611 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
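The records above capture one pass of the test's authentication matrix: for each DH-HMAC-CHAP dhgroup, the host-side SPDK app is pinned to a single digest/dhgroup pair, the host NQN is registered on the subsystem with a key (and, for bidirectional auth, a controller key), a controller is attached, and the negotiated parameters are read back from the qpair. Condensed into standalone form, one pass looks roughly like the sketch below. This is a minimal sketch, not the test script itself: the socket path, addresses, NQNs, and key names (key0/ckey0) are taken from this log, the target app is assumed to answer on the default RPC socket, and the keys are assumed to have been loaded into both apps earlier in the run, as the script does before this point.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f
  subnqn=nqn.2024-03.io.spdk:cnode0

  # pin the host-side app to one digest/dhgroup combination for this iteration
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # register the host on the subsystem with a bidirectional key pair
  # (target app assumed on the default RPC socket)
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # attach a controller; CONNECT now has to complete DH-HMAC-CHAP
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # read back what was actually negotiated on the qpair
  $rpc nvmf_subsystem_get_qpairs "$subnqn" \
      | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'
  # expected for this iteration: completed sha384 ffdhe6144

  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The kernel-initiator leg that follows each pass in the log exercises the same handshake from nvme-cli, passing the secrets in their DHHC-1 wire form (nvme connect ... --dhchap-secret DHHC-1:xx:... --dhchap-ctrl-secret DHHC-1:xx:...), disconnecting, and then removing the host from the subsystem before the next key/dhgroup combination is set up.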
00:12:36.611 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:36.611 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.611 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:36.611 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.611 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.611 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.611 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.176 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:12:37.176 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:12:38.110 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.110 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:12:38.110 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.110 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.110 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.110 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:38.110 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:38.110 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:38.110 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:12:38.110 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:38.110 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:38.110 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:12:38.110 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:38.110 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.110 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.110 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.110 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.110 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.110 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.110 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.110 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.673 00:12:38.673 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:38.673 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.673 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.239 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.239 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.239 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.239 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.239 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.239 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:39.239 { 00:12:39.239 "cntlid": 83, 00:12:39.239 "qid": 0, 00:12:39.239 "state": "enabled", 00:12:39.239 "thread": "nvmf_tgt_poll_group_000", 00:12:39.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:12:39.239 "listen_address": { 00:12:39.239 "trtype": "TCP", 00:12:39.239 "adrfam": "IPv4", 00:12:39.239 "traddr": "10.0.0.3", 00:12:39.239 "trsvcid": "4420" 00:12:39.239 }, 00:12:39.239 "peer_address": { 00:12:39.239 "trtype": "TCP", 00:12:39.239 "adrfam": "IPv4", 00:12:39.239 "traddr": "10.0.0.1", 00:12:39.239 "trsvcid": "33290" 00:12:39.239 }, 00:12:39.239 "auth": { 00:12:39.239 "state": "completed", 00:12:39.239 "digest": "sha384", 
00:12:39.239 "dhgroup": "ffdhe6144" 00:12:39.239 } 00:12:39.239 } 00:12:39.239 ]' 00:12:39.239 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:39.239 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:39.239 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:39.239 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:39.239 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:39.239 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.239 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.239 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.509 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:12:39.509 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:12:40.448 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.448 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:12:40.448 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.448 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.448 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.448 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:40.448 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:40.448 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:40.706 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:12:40.706 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.706 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:12:40.706 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:40.706 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:40.706 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.706 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.706 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.706 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.706 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.706 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.706 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.706 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.272 00:12:41.272 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:41.272 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:41.272 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.531 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.531 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.531 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.531 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.531 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.531 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:41.531 { 00:12:41.531 "cntlid": 85, 00:12:41.531 "qid": 0, 00:12:41.531 "state": "enabled", 00:12:41.531 "thread": "nvmf_tgt_poll_group_000", 00:12:41.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:12:41.531 "listen_address": { 00:12:41.531 "trtype": "TCP", 00:12:41.531 "adrfam": "IPv4", 00:12:41.531 "traddr": "10.0.0.3", 00:12:41.531 "trsvcid": "4420" 00:12:41.531 }, 00:12:41.531 "peer_address": { 00:12:41.531 "trtype": "TCP", 00:12:41.531 "adrfam": "IPv4", 00:12:41.531 "traddr": "10.0.0.1", 00:12:41.531 "trsvcid": "33328" 
00:12:41.531 }, 00:12:41.531 "auth": { 00:12:41.531 "state": "completed", 00:12:41.531 "digest": "sha384", 00:12:41.531 "dhgroup": "ffdhe6144" 00:12:41.531 } 00:12:41.531 } 00:12:41.531 ]' 00:12:41.531 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:41.531 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:41.531 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:41.531 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:41.531 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:41.531 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.531 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.531 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.097 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:12:42.097 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:12:42.663 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.663 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:12:42.663 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.663 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.663 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.663 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:42.663 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:42.663 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:42.922 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:12:42.922 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:12:42.922 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:42.922 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:42.922 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:42.922 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.922 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key3 00:12:42.922 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.922 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.922 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.922 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:42.922 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:42.922 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:43.489 00:12:43.489 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:43.489 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.489 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:43.748 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.748 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.748 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.748 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.748 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.748 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:43.748 { 00:12:43.748 "cntlid": 87, 00:12:43.748 "qid": 0, 00:12:43.748 "state": "enabled", 00:12:43.748 "thread": "nvmf_tgt_poll_group_000", 00:12:43.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:12:43.748 "listen_address": { 00:12:43.748 "trtype": "TCP", 00:12:43.748 "adrfam": "IPv4", 00:12:43.748 "traddr": "10.0.0.3", 00:12:43.748 "trsvcid": "4420" 00:12:43.748 }, 00:12:43.748 "peer_address": { 00:12:43.748 "trtype": "TCP", 00:12:43.748 "adrfam": "IPv4", 00:12:43.748 "traddr": "10.0.0.1", 00:12:43.748 "trsvcid": 
"43652" 00:12:43.748 }, 00:12:43.748 "auth": { 00:12:43.748 "state": "completed", 00:12:43.748 "digest": "sha384", 00:12:43.748 "dhgroup": "ffdhe6144" 00:12:43.748 } 00:12:43.748 } 00:12:43.748 ]' 00:12:43.748 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:43.748 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:43.748 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:43.748 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:43.748 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.748 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.748 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.748 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.314 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:12:44.314 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:12:44.907 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.907 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:12:44.907 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.907 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.907 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.907 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:44.907 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.907 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:44.907 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:45.164 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:12:45.164 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:12:45.164 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:45.164 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:45.164 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:45.164 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.164 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.164 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.164 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.164 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.164 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.164 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.164 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:46.097 00:12:46.097 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.097 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.097 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.356 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.356 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.356 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.356 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.356 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.356 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.356 { 00:12:46.356 "cntlid": 89, 00:12:46.356 "qid": 0, 00:12:46.356 "state": "enabled", 00:12:46.356 "thread": "nvmf_tgt_poll_group_000", 00:12:46.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:12:46.356 "listen_address": { 00:12:46.356 "trtype": "TCP", 00:12:46.356 "adrfam": "IPv4", 00:12:46.356 "traddr": "10.0.0.3", 00:12:46.356 "trsvcid": "4420" 00:12:46.356 }, 00:12:46.356 "peer_address": { 00:12:46.356 
"trtype": "TCP", 00:12:46.356 "adrfam": "IPv4", 00:12:46.356 "traddr": "10.0.0.1", 00:12:46.356 "trsvcid": "43674" 00:12:46.356 }, 00:12:46.356 "auth": { 00:12:46.356 "state": "completed", 00:12:46.356 "digest": "sha384", 00:12:46.356 "dhgroup": "ffdhe8192" 00:12:46.356 } 00:12:46.356 } 00:12:46.356 ]' 00:12:46.356 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.356 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:46.356 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:46.356 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:46.356 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.356 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.356 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.356 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.923 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:12:46.923 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:12:47.549 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.549 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:12:47.549 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.549 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.549 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.549 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:47.549 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:47.549 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:47.808 10:36:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:12:47.808 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:47.808 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:47.808 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:47.808 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:47.808 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.808 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.808 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.808 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.808 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.808 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.808 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.808 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:48.373 00:12:48.373 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.374 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:48.374 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.632 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.632 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.632 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.632 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.632 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.632 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:48.632 { 00:12:48.632 "cntlid": 91, 00:12:48.632 "qid": 0, 00:12:48.632 "state": "enabled", 00:12:48.632 "thread": "nvmf_tgt_poll_group_000", 00:12:48.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 
00:12:48.632 "listen_address": { 00:12:48.632 "trtype": "TCP", 00:12:48.632 "adrfam": "IPv4", 00:12:48.632 "traddr": "10.0.0.3", 00:12:48.632 "trsvcid": "4420" 00:12:48.632 }, 00:12:48.632 "peer_address": { 00:12:48.632 "trtype": "TCP", 00:12:48.632 "adrfam": "IPv4", 00:12:48.632 "traddr": "10.0.0.1", 00:12:48.632 "trsvcid": "43718" 00:12:48.632 }, 00:12:48.632 "auth": { 00:12:48.632 "state": "completed", 00:12:48.632 "digest": "sha384", 00:12:48.632 "dhgroup": "ffdhe8192" 00:12:48.632 } 00:12:48.632 } 00:12:48.632 ]' 00:12:48.632 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:48.632 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:48.632 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:48.890 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:48.890 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:48.890 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.890 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.890 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.148 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:12:49.148 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:12:49.715 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.715 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:12:49.715 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.715 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.715 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.715 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:49.715 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:49.715 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:49.973 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:12:49.973 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:49.973 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:49.973 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:49.973 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:49.973 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.973 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.973 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.973 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.973 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.973 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.973 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.973 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:50.908 00:12:50.908 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.908 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:50.908 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.166 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.166 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.166 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.166 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.166 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.166 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.166 { 00:12:51.166 "cntlid": 93, 00:12:51.166 "qid": 0, 00:12:51.166 "state": "enabled", 00:12:51.166 "thread": 
"nvmf_tgt_poll_group_000", 00:12:51.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:12:51.166 "listen_address": { 00:12:51.166 "trtype": "TCP", 00:12:51.166 "adrfam": "IPv4", 00:12:51.166 "traddr": "10.0.0.3", 00:12:51.166 "trsvcid": "4420" 00:12:51.166 }, 00:12:51.166 "peer_address": { 00:12:51.166 "trtype": "TCP", 00:12:51.166 "adrfam": "IPv4", 00:12:51.166 "traddr": "10.0.0.1", 00:12:51.166 "trsvcid": "43756" 00:12:51.166 }, 00:12:51.166 "auth": { 00:12:51.166 "state": "completed", 00:12:51.166 "digest": "sha384", 00:12:51.166 "dhgroup": "ffdhe8192" 00:12:51.166 } 00:12:51.166 } 00:12:51.166 ]' 00:12:51.166 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.166 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:51.166 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.166 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:51.166 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.166 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.166 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.166 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.425 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:12:51.425 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:12:52.798 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.798 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:12:52.798 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.798 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.798 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.798 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.799 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:52.799 10:36:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:52.799 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:12:52.799 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.799 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:52.799 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:52.799 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:52.799 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.799 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key3 00:12:52.799 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.799 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.799 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.799 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:52.799 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:52.799 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:53.732 00:12:53.732 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:53.732 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.732 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.990 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.990 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.990 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.990 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.990 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.990 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.990 { 00:12:53.990 "cntlid": 95, 00:12:53.990 "qid": 0, 00:12:53.990 "state": "enabled", 00:12:53.990 
"thread": "nvmf_tgt_poll_group_000", 00:12:53.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:12:53.990 "listen_address": { 00:12:53.990 "trtype": "TCP", 00:12:53.990 "adrfam": "IPv4", 00:12:53.990 "traddr": "10.0.0.3", 00:12:53.990 "trsvcid": "4420" 00:12:53.990 }, 00:12:53.990 "peer_address": { 00:12:53.990 "trtype": "TCP", 00:12:53.990 "adrfam": "IPv4", 00:12:53.990 "traddr": "10.0.0.1", 00:12:53.990 "trsvcid": "55732" 00:12:53.990 }, 00:12:53.990 "auth": { 00:12:53.990 "state": "completed", 00:12:53.990 "digest": "sha384", 00:12:53.990 "dhgroup": "ffdhe8192" 00:12:53.990 } 00:12:53.990 } 00:12:53.990 ]' 00:12:53.991 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.991 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:53.991 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.991 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:53.991 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.991 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.991 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.991 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.249 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:12:54.249 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:12:55.190 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.190 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:12:55.190 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.190 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.190 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.190 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:55.190 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:55.190 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:55.190 10:36:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:55.190 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:55.464 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:12:55.464 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:55.464 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:55.464 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:55.464 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:55.464 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.464 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.464 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.464 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.464 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.464 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.464 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.464 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.722 00:12:55.722 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:55.722 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.722 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.980 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.980 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.980 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.980 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.980 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.980 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.980 { 00:12:55.980 "cntlid": 97, 00:12:55.980 "qid": 0, 00:12:55.980 "state": "enabled", 00:12:55.980 "thread": "nvmf_tgt_poll_group_000", 00:12:55.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:12:55.980 "listen_address": { 00:12:55.980 "trtype": "TCP", 00:12:55.980 "adrfam": "IPv4", 00:12:55.980 "traddr": "10.0.0.3", 00:12:55.980 "trsvcid": "4420" 00:12:55.980 }, 00:12:55.980 "peer_address": { 00:12:55.980 "trtype": "TCP", 00:12:55.980 "adrfam": "IPv4", 00:12:55.980 "traddr": "10.0.0.1", 00:12:55.980 "trsvcid": "55776" 00:12:55.980 }, 00:12:55.980 "auth": { 00:12:55.980 "state": "completed", 00:12:55.980 "digest": "sha512", 00:12:55.980 "dhgroup": "null" 00:12:55.980 } 00:12:55.980 } 00:12:55.980 ]' 00:12:55.980 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:56.238 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:56.238 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:56.238 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:56.238 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:56.238 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.238 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.238 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.496 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:12:56.496 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:12:57.431 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.431 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:12:57.431 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.431 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.431 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:57.431 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:57.431 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:57.431 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:57.689 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:12:57.689 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:57.689 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:57.689 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:57.689 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:57.689 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.689 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.689 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.689 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.689 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.689 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.689 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.689 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.947 00:12:57.947 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:57.947 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.947 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.205 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.205 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.205 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.205 10:36:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.205 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.205 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:58.205 { 00:12:58.205 "cntlid": 99, 00:12:58.205 "qid": 0, 00:12:58.205 "state": "enabled", 00:12:58.205 "thread": "nvmf_tgt_poll_group_000", 00:12:58.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:12:58.205 "listen_address": { 00:12:58.205 "trtype": "TCP", 00:12:58.205 "adrfam": "IPv4", 00:12:58.205 "traddr": "10.0.0.3", 00:12:58.205 "trsvcid": "4420" 00:12:58.205 }, 00:12:58.205 "peer_address": { 00:12:58.205 "trtype": "TCP", 00:12:58.205 "adrfam": "IPv4", 00:12:58.205 "traddr": "10.0.0.1", 00:12:58.205 "trsvcid": "55808" 00:12:58.205 }, 00:12:58.205 "auth": { 00:12:58.205 "state": "completed", 00:12:58.205 "digest": "sha512", 00:12:58.205 "dhgroup": "null" 00:12:58.205 } 00:12:58.205 } 00:12:58.205 ]' 00:12:58.205 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:58.463 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:58.463 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:58.463 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:58.463 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:58.463 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.463 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.463 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.721 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:12:58.721 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:12:59.287 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.287 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:12:59.287 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.287 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.287 10:36:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.287 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:59.287 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:59.287 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:59.554 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:12:59.554 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:59.554 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:59.554 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:59.554 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:59.554 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.554 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:59.554 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.554 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.554 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.554 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:59.554 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:59.554 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.132 00:13:00.132 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:00.132 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.132 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:00.390 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.390 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.390 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.390 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.390 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.390 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.390 { 00:13:00.390 "cntlid": 101, 00:13:00.390 "qid": 0, 00:13:00.390 "state": "enabled", 00:13:00.390 "thread": "nvmf_tgt_poll_group_000", 00:13:00.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:00.390 "listen_address": { 00:13:00.390 "trtype": "TCP", 00:13:00.390 "adrfam": "IPv4", 00:13:00.390 "traddr": "10.0.0.3", 00:13:00.390 "trsvcid": "4420" 00:13:00.390 }, 00:13:00.390 "peer_address": { 00:13:00.390 "trtype": "TCP", 00:13:00.390 "adrfam": "IPv4", 00:13:00.390 "traddr": "10.0.0.1", 00:13:00.390 "trsvcid": "55832" 00:13:00.390 }, 00:13:00.390 "auth": { 00:13:00.390 "state": "completed", 00:13:00.390 "digest": "sha512", 00:13:00.390 "dhgroup": "null" 00:13:00.390 } 00:13:00.390 } 00:13:00.390 ]' 00:13:00.390 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.390 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:00.390 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.390 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:00.390 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:00.390 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.390 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.390 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.737 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:13:00.738 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:13:01.304 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.304 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:01.304 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.304 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:13:01.304 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.304 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.304 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:01.304 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:01.562 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:13:01.562 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:01.562 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:01.562 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:01.562 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:01.562 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.562 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key3 00:13:01.562 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.562 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.562 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.562 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:01.562 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:01.562 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:02.128 00:13:02.128 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.128 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.128 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.386 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.386 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.386 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:02.386 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.386 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.386 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.386 { 00:13:02.386 "cntlid": 103, 00:13:02.386 "qid": 0, 00:13:02.386 "state": "enabled", 00:13:02.386 "thread": "nvmf_tgt_poll_group_000", 00:13:02.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:02.386 "listen_address": { 00:13:02.386 "trtype": "TCP", 00:13:02.386 "adrfam": "IPv4", 00:13:02.386 "traddr": "10.0.0.3", 00:13:02.386 "trsvcid": "4420" 00:13:02.386 }, 00:13:02.386 "peer_address": { 00:13:02.386 "trtype": "TCP", 00:13:02.386 "adrfam": "IPv4", 00:13:02.386 "traddr": "10.0.0.1", 00:13:02.386 "trsvcid": "37850" 00:13:02.386 }, 00:13:02.386 "auth": { 00:13:02.386 "state": "completed", 00:13:02.386 "digest": "sha512", 00:13:02.386 "dhgroup": "null" 00:13:02.386 } 00:13:02.386 } 00:13:02.386 ]' 00:13:02.386 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.386 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:02.386 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.386 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:02.386 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.386 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.386 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.386 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.952 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:13:02.952 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:13:03.518 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.518 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:03.518 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.518 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.518 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:13:03.518 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:03.518 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.518 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:03.518 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:03.840 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:13:03.840 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.840 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:03.840 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:03.840 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:03.840 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.840 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.840 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.840 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.840 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.840 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.840 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.840 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.109 00:13:04.109 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.109 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.109 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.367 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.367 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.367 
10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.367 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.367 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.367 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.367 { 00:13:04.367 "cntlid": 105, 00:13:04.367 "qid": 0, 00:13:04.367 "state": "enabled", 00:13:04.367 "thread": "nvmf_tgt_poll_group_000", 00:13:04.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:04.367 "listen_address": { 00:13:04.367 "trtype": "TCP", 00:13:04.367 "adrfam": "IPv4", 00:13:04.367 "traddr": "10.0.0.3", 00:13:04.367 "trsvcid": "4420" 00:13:04.367 }, 00:13:04.367 "peer_address": { 00:13:04.367 "trtype": "TCP", 00:13:04.367 "adrfam": "IPv4", 00:13:04.367 "traddr": "10.0.0.1", 00:13:04.367 "trsvcid": "37870" 00:13:04.367 }, 00:13:04.367 "auth": { 00:13:04.367 "state": "completed", 00:13:04.367 "digest": "sha512", 00:13:04.367 "dhgroup": "ffdhe2048" 00:13:04.367 } 00:13:04.367 } 00:13:04.367 ]' 00:13:04.367 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.367 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:04.367 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.367 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:04.367 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.624 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.624 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.624 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.883 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:13:04.883 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:13:05.448 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.448 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:05.448 10:36:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.448 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.448 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.448 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.448 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:05.448 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:05.706 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:13:05.706 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:05.706 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:05.706 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:05.706 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:05.706 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.706 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.706 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.706 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.706 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.706 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.706 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.706 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.271 00:13:06.271 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.272 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.272 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.530 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:13:06.530 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.530 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.530 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.530 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.530 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.530 { 00:13:06.530 "cntlid": 107, 00:13:06.530 "qid": 0, 00:13:06.530 "state": "enabled", 00:13:06.530 "thread": "nvmf_tgt_poll_group_000", 00:13:06.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:06.530 "listen_address": { 00:13:06.530 "trtype": "TCP", 00:13:06.530 "adrfam": "IPv4", 00:13:06.530 "traddr": "10.0.0.3", 00:13:06.530 "trsvcid": "4420" 00:13:06.530 }, 00:13:06.530 "peer_address": { 00:13:06.530 "trtype": "TCP", 00:13:06.530 "adrfam": "IPv4", 00:13:06.530 "traddr": "10.0.0.1", 00:13:06.530 "trsvcid": "37916" 00:13:06.530 }, 00:13:06.530 "auth": { 00:13:06.530 "state": "completed", 00:13:06.530 "digest": "sha512", 00:13:06.530 "dhgroup": "ffdhe2048" 00:13:06.530 } 00:13:06.530 } 00:13:06.530 ]' 00:13:06.530 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.530 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:06.530 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.530 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:06.530 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.530 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.530 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.530 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.095 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:13:07.095 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:13:07.661 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.661 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:07.661 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.661 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.661 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.661 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.661 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:07.661 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:07.962 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:13:07.962 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:07.962 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:07.962 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:07.962 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:07.962 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.962 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.962 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.962 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.962 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.962 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.962 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.962 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.253 00:13:08.253 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.253 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.253 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.512 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.512 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.512 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.512 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.769 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.769 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:08.769 { 00:13:08.769 "cntlid": 109, 00:13:08.769 "qid": 0, 00:13:08.769 "state": "enabled", 00:13:08.769 "thread": "nvmf_tgt_poll_group_000", 00:13:08.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:08.769 "listen_address": { 00:13:08.769 "trtype": "TCP", 00:13:08.769 "adrfam": "IPv4", 00:13:08.769 "traddr": "10.0.0.3", 00:13:08.769 "trsvcid": "4420" 00:13:08.769 }, 00:13:08.769 "peer_address": { 00:13:08.769 "trtype": "TCP", 00:13:08.769 "adrfam": "IPv4", 00:13:08.769 "traddr": "10.0.0.1", 00:13:08.769 "trsvcid": "37930" 00:13:08.769 }, 00:13:08.769 "auth": { 00:13:08.769 "state": "completed", 00:13:08.769 "digest": "sha512", 00:13:08.769 "dhgroup": "ffdhe2048" 00:13:08.769 } 00:13:08.769 } 00:13:08.769 ]' 00:13:08.769 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:08.769 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:08.769 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:08.769 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:08.769 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:08.769 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.769 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.769 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.027 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:13:09.027 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:13:09.966 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:13:09.966 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:09.966 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.966 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.966 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.966 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:09.966 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:09.966 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:10.224 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:13:10.224 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.224 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:10.224 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:10.224 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:10.224 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.224 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key3 00:13:10.224 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.224 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.224 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.224 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:10.224 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:10.224 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:10.482 00:13:10.482 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.482 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.482 10:36:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:10.739 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.739 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.739 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.739 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.739 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.739 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:10.739 { 00:13:10.739 "cntlid": 111, 00:13:10.739 "qid": 0, 00:13:10.739 "state": "enabled", 00:13:10.739 "thread": "nvmf_tgt_poll_group_000", 00:13:10.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:10.740 "listen_address": { 00:13:10.740 "trtype": "TCP", 00:13:10.740 "adrfam": "IPv4", 00:13:10.740 "traddr": "10.0.0.3", 00:13:10.740 "trsvcid": "4420" 00:13:10.740 }, 00:13:10.740 "peer_address": { 00:13:10.740 "trtype": "TCP", 00:13:10.740 "adrfam": "IPv4", 00:13:10.740 "traddr": "10.0.0.1", 00:13:10.740 "trsvcid": "37952" 00:13:10.740 }, 00:13:10.740 "auth": { 00:13:10.740 "state": "completed", 00:13:10.740 "digest": "sha512", 00:13:10.740 "dhgroup": "ffdhe2048" 00:13:10.740 } 00:13:10.740 } 00:13:10.740 ]' 00:13:10.740 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:10.997 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:10.997 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:10.997 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:10.997 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:10.997 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.997 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.997 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.254 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:13:11.254 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:13:11.891 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.891 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:11.891 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.891 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.891 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.891 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:11.891 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:11.891 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:11.891 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:12.162 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:13:12.162 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.162 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:12.162 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:12.162 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:12.162 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.162 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.162 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.162 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.162 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.162 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.162 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.162 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.727 00:13:12.727 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:12.727 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.727 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:12.984 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.984 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.984 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.984 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.984 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.984 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:12.984 { 00:13:12.984 "cntlid": 113, 00:13:12.984 "qid": 0, 00:13:12.984 "state": "enabled", 00:13:12.984 "thread": "nvmf_tgt_poll_group_000", 00:13:12.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:12.984 "listen_address": { 00:13:12.984 "trtype": "TCP", 00:13:12.984 "adrfam": "IPv4", 00:13:12.984 "traddr": "10.0.0.3", 00:13:12.984 "trsvcid": "4420" 00:13:12.984 }, 00:13:12.984 "peer_address": { 00:13:12.984 "trtype": "TCP", 00:13:12.984 "adrfam": "IPv4", 00:13:12.984 "traddr": "10.0.0.1", 00:13:12.984 "trsvcid": "38410" 00:13:12.984 }, 00:13:12.984 "auth": { 00:13:12.984 "state": "completed", 00:13:12.984 "digest": "sha512", 00:13:12.984 "dhgroup": "ffdhe3072" 00:13:12.984 } 00:13:12.984 } 00:13:12.984 ]' 00:13:12.984 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:12.984 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:12.984 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:12.984 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:12.984 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.243 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.243 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.243 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.502 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:13:13.502 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret 
DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:13:14.069 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.069 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:14.069 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.069 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.069 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.069 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:14.069 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:14.069 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:14.328 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:13:14.328 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:14.328 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:14.328 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:14.328 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:14.328 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.328 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.328 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.328 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.328 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.328 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.328 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.328 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.893 00:13:14.893 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:14.893 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:14.893 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.150 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.150 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.151 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.151 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.151 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.151 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:15.151 { 00:13:15.151 "cntlid": 115, 00:13:15.151 "qid": 0, 00:13:15.151 "state": "enabled", 00:13:15.151 "thread": "nvmf_tgt_poll_group_000", 00:13:15.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:15.151 "listen_address": { 00:13:15.151 "trtype": "TCP", 00:13:15.151 "adrfam": "IPv4", 00:13:15.151 "traddr": "10.0.0.3", 00:13:15.151 "trsvcid": "4420" 00:13:15.151 }, 00:13:15.151 "peer_address": { 00:13:15.151 "trtype": "TCP", 00:13:15.151 "adrfam": "IPv4", 00:13:15.151 "traddr": "10.0.0.1", 00:13:15.151 "trsvcid": "38438" 00:13:15.151 }, 00:13:15.151 "auth": { 00:13:15.151 "state": "completed", 00:13:15.151 "digest": "sha512", 00:13:15.151 "dhgroup": "ffdhe3072" 00:13:15.151 } 00:13:15.151 } 00:13:15.151 ]' 00:13:15.151 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:15.151 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:15.151 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:15.151 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:15.151 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:15.409 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.409 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.409 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.667 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:13:15.667 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 
50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:13:16.292 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.292 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:16.292 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.292 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.292 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.292 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.292 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:16.292 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:16.551 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:13:16.551 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:16.551 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:16.551 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:16.551 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:16.551 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.551 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.551 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.551 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.551 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.551 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.551 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.551 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.117 00:13:17.117 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.117 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.117 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.375 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.375 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.375 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.375 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.375 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.375 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.375 { 00:13:17.375 "cntlid": 117, 00:13:17.375 "qid": 0, 00:13:17.375 "state": "enabled", 00:13:17.375 "thread": "nvmf_tgt_poll_group_000", 00:13:17.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:17.375 "listen_address": { 00:13:17.375 "trtype": "TCP", 00:13:17.375 "adrfam": "IPv4", 00:13:17.375 "traddr": "10.0.0.3", 00:13:17.375 "trsvcid": "4420" 00:13:17.375 }, 00:13:17.375 "peer_address": { 00:13:17.375 "trtype": "TCP", 00:13:17.375 "adrfam": "IPv4", 00:13:17.375 "traddr": "10.0.0.1", 00:13:17.375 "trsvcid": "38460" 00:13:17.375 }, 00:13:17.375 "auth": { 00:13:17.375 "state": "completed", 00:13:17.375 "digest": "sha512", 00:13:17.375 "dhgroup": "ffdhe3072" 00:13:17.375 } 00:13:17.375 } 00:13:17.375 ]' 00:13:17.375 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.375 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:17.375 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.375 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:17.375 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.633 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.633 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.633 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.890 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:13:17.890 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:13:18.823 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.823 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:18.823 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.823 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.823 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.823 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.823 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:18.823 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:19.082 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:13:19.082 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:19.082 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:19.082 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:19.082 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:19.082 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.082 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key3 00:13:19.082 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.082 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.082 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.082 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:19.082 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:19.082 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:19.341 00:13:19.341 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.341 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.341 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.599 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.599 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.599 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.599 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.599 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.599 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.599 { 00:13:19.599 "cntlid": 119, 00:13:19.599 "qid": 0, 00:13:19.599 "state": "enabled", 00:13:19.599 "thread": "nvmf_tgt_poll_group_000", 00:13:19.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:19.599 "listen_address": { 00:13:19.599 "trtype": "TCP", 00:13:19.599 "adrfam": "IPv4", 00:13:19.599 "traddr": "10.0.0.3", 00:13:19.599 "trsvcid": "4420" 00:13:19.599 }, 00:13:19.599 "peer_address": { 00:13:19.599 "trtype": "TCP", 00:13:19.599 "adrfam": "IPv4", 00:13:19.599 "traddr": "10.0.0.1", 00:13:19.599 "trsvcid": "38482" 00:13:19.599 }, 00:13:19.599 "auth": { 00:13:19.599 "state": "completed", 00:13:19.599 "digest": "sha512", 00:13:19.599 "dhgroup": "ffdhe3072" 00:13:19.599 } 00:13:19.599 } 00:13:19.599 ]' 00:13:19.599 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:19.599 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:19.599 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:19.600 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:19.600 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:19.859 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.859 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.859 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.116 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:13:20.116 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:13:20.683 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.683 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:20.683 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.683 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.683 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.683 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:20.683 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:20.683 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:20.683 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:20.942 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:13:20.942 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:20.942 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:20.942 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:20.942 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:20.942 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.942 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.942 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.942 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.942 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.942 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.942 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.942 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.509 00:13:21.509 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:21.509 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:21.509 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.767 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.767 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.767 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.767 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.767 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.767 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.767 { 00:13:21.767 "cntlid": 121, 00:13:21.767 "qid": 0, 00:13:21.767 "state": "enabled", 00:13:21.767 "thread": "nvmf_tgt_poll_group_000", 00:13:21.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:21.767 "listen_address": { 00:13:21.767 "trtype": "TCP", 00:13:21.767 "adrfam": "IPv4", 00:13:21.767 "traddr": "10.0.0.3", 00:13:21.767 "trsvcid": "4420" 00:13:21.767 }, 00:13:21.767 "peer_address": { 00:13:21.767 "trtype": "TCP", 00:13:21.767 "adrfam": "IPv4", 00:13:21.767 "traddr": "10.0.0.1", 00:13:21.767 "trsvcid": "38502" 00:13:21.767 }, 00:13:21.767 "auth": { 00:13:21.767 "state": "completed", 00:13:21.767 "digest": "sha512", 00:13:21.767 "dhgroup": "ffdhe4096" 00:13:21.767 } 00:13:21.767 } 00:13:21.767 ]' 00:13:21.767 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:21.767 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:21.767 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.767 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:21.767 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:22.025 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.026 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.026 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.284 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret 
DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:13:22.284 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:13:22.852 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.852 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:22.852 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.852 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.852 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.852 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.852 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:22.852 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:23.111 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:13:23.111 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:23.111 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:23.111 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:23.111 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:23.111 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.111 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.111 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.111 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.111 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.111 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.111 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.111 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.679 00:13:23.679 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:23.679 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.679 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.937 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.937 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.937 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.937 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.937 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.937 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.937 { 00:13:23.937 "cntlid": 123, 00:13:23.937 "qid": 0, 00:13:23.937 "state": "enabled", 00:13:23.937 "thread": "nvmf_tgt_poll_group_000", 00:13:23.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:23.937 "listen_address": { 00:13:23.937 "trtype": "TCP", 00:13:23.937 "adrfam": "IPv4", 00:13:23.937 "traddr": "10.0.0.3", 00:13:23.937 "trsvcid": "4420" 00:13:23.937 }, 00:13:23.937 "peer_address": { 00:13:23.937 "trtype": "TCP", 00:13:23.937 "adrfam": "IPv4", 00:13:23.937 "traddr": "10.0.0.1", 00:13:23.937 "trsvcid": "45460" 00:13:23.937 }, 00:13:23.937 "auth": { 00:13:23.937 "state": "completed", 00:13:23.937 "digest": "sha512", 00:13:23.937 "dhgroup": "ffdhe4096" 00:13:23.937 } 00:13:23.937 } 00:13:23.937 ]' 00:13:23.937 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.937 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:23.937 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.937 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:23.937 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.937 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.937 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.937 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.503 10:36:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:13:24.503 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:13:25.070 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.070 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:25.070 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.070 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.070 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.070 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:25.070 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:25.070 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:25.328 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:13:25.328 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:25.328 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:25.328 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:25.328 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:25.328 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.328 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.328 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.328 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.328 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.328 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.328 10:36:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.328 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.587 00:13:25.587 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:25.587 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.587 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:26.154 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.154 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.154 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.154 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.154 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.154 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:26.154 { 00:13:26.154 "cntlid": 125, 00:13:26.154 "qid": 0, 00:13:26.154 "state": "enabled", 00:13:26.154 "thread": "nvmf_tgt_poll_group_000", 00:13:26.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:26.154 "listen_address": { 00:13:26.154 "trtype": "TCP", 00:13:26.154 "adrfam": "IPv4", 00:13:26.154 "traddr": "10.0.0.3", 00:13:26.154 "trsvcid": "4420" 00:13:26.154 }, 00:13:26.154 "peer_address": { 00:13:26.154 "trtype": "TCP", 00:13:26.154 "adrfam": "IPv4", 00:13:26.154 "traddr": "10.0.0.1", 00:13:26.154 "trsvcid": "45494" 00:13:26.154 }, 00:13:26.154 "auth": { 00:13:26.154 "state": "completed", 00:13:26.154 "digest": "sha512", 00:13:26.154 "dhgroup": "ffdhe4096" 00:13:26.154 } 00:13:26.154 } 00:13:26.154 ]' 00:13:26.154 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:26.154 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:26.154 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:26.154 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:26.154 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:26.154 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.154 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.154 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.412 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:13:26.412 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:13:27.347 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.347 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:27.347 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.347 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.347 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.347 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:27.347 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:27.347 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:27.605 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:13:27.605 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:27.605 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:27.605 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:27.605 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:27.605 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.605 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key3 00:13:27.605 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.605 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.605 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.605 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:13:27.605 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:27.605 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:28.171 00:13:28.171 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:28.171 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:28.171 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.429 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.429 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.429 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.429 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.429 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.429 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:28.429 { 00:13:28.429 "cntlid": 127, 00:13:28.429 "qid": 0, 00:13:28.429 "state": "enabled", 00:13:28.429 "thread": "nvmf_tgt_poll_group_000", 00:13:28.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:28.429 "listen_address": { 00:13:28.429 "trtype": "TCP", 00:13:28.429 "adrfam": "IPv4", 00:13:28.429 "traddr": "10.0.0.3", 00:13:28.429 "trsvcid": "4420" 00:13:28.429 }, 00:13:28.429 "peer_address": { 00:13:28.429 "trtype": "TCP", 00:13:28.429 "adrfam": "IPv4", 00:13:28.429 "traddr": "10.0.0.1", 00:13:28.429 "trsvcid": "45514" 00:13:28.429 }, 00:13:28.429 "auth": { 00:13:28.429 "state": "completed", 00:13:28.429 "digest": "sha512", 00:13:28.429 "dhgroup": "ffdhe4096" 00:13:28.429 } 00:13:28.429 } 00:13:28.429 ]' 00:13:28.429 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:28.429 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:28.429 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:28.687 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:28.687 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:28.687 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.687 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.687 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.944 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:13:28.944 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:13:29.877 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.877 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:29.877 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.877 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.877 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.877 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:29.877 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:29.877 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:29.877 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:29.877 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:13:29.878 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.878 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:29.878 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:29.878 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:29.878 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.878 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.878 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.878 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.136 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.136 10:36:55 
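
The target/auth.sh@119-@123 frames above trace the enclosing loops: for every DH group, and for every key ID, the host is pinned to one digest/group via bdev_nvme_set_options, then connect_authenticate runs the add-host/attach/verify/detach cycle. A sketch of that skeleton as reconstructed from the frame markers (array names match the expansions in the trace; an outer digest loop is likely but not visible in this excerpt):

  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          # Restrict the host to a single digest/DH group so the values
          # negotiated during DH-HMAC-CHAP are deterministic and checkable.
          hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha512 "$dhgroup" "$keyid"
      done
  done
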
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.136 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.136 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.393 00:13:30.393 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:30.393 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.393 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:30.959 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.959 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.959 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.959 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.959 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.959 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:30.959 { 00:13:30.959 "cntlid": 129, 00:13:30.959 "qid": 0, 00:13:30.959 "state": "enabled", 00:13:30.959 "thread": "nvmf_tgt_poll_group_000", 00:13:30.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:30.959 "listen_address": { 00:13:30.959 "trtype": "TCP", 00:13:30.959 "adrfam": "IPv4", 00:13:30.959 "traddr": "10.0.0.3", 00:13:30.959 "trsvcid": "4420" 00:13:30.959 }, 00:13:30.959 "peer_address": { 00:13:30.959 "trtype": "TCP", 00:13:30.959 "adrfam": "IPv4", 00:13:30.959 "traddr": "10.0.0.1", 00:13:30.959 "trsvcid": "45534" 00:13:30.959 }, 00:13:30.959 "auth": { 00:13:30.959 "state": "completed", 00:13:30.959 "digest": "sha512", 00:13:30.959 "dhgroup": "ffdhe6144" 00:13:30.959 } 00:13:30.959 } 00:13:30.959 ]' 00:13:30.959 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:30.959 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:30.959 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:30.959 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:30.959 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:30.959 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.959 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
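
Each connect_authenticate pass then asserts, on the target side, that the qpair actually negotiated what was requested. Reconstructed from the @73-@77 frames (the jq paths are verbatim; variable names are assumptions):

  # The controller must show up under the expected name on the host side...
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # ...and the target's qpair must report the negotiated auth parameters.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
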
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.959 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.219 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:13:31.219 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:13:32.154 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.154 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:32.154 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.154 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.154 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.154 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.154 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:32.154 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:32.412 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:13:32.412 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:32.412 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:32.412 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:32.412 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:32.412 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.412 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.412 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.412 10:36:57 
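
The secrets handed to nvme connect use the NVMe-oF DH-HMAC-CHAP key representation, DHHC-1:<t>:<base64 material>:. The second field records how the secret is stored: 00 for an untransformed secret, 01/02/03 for one transformed with SHA-256/384/512 — which is why the :00: keys in this trace are short while the :03: ones are SHA-512-sized — and the base64 material carries the key bytes plus a trailing CRC-32 check. nvme-cli can mint such keys; a hedged example (option spellings per recent nvme-cli, verify against `nvme gen-dhchap-key --help` on your version):

  # Generate a SHA-512-transformed (hmac=3) 64-byte DH-HMAC-CHAP secret
  # bound to the host NQN it will authenticate.
  nvme gen-dhchap-key --hmac=3 --key-length=64 \
      --nqn=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f
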
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.412 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.412 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.412 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.412 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.978 00:13:32.978 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:32.978 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.978 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.236 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.236 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.236 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.236 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.236 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.236 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.236 { 00:13:33.236 "cntlid": 131, 00:13:33.236 "qid": 0, 00:13:33.236 "state": "enabled", 00:13:33.236 "thread": "nvmf_tgt_poll_group_000", 00:13:33.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:33.236 "listen_address": { 00:13:33.236 "trtype": "TCP", 00:13:33.236 "adrfam": "IPv4", 00:13:33.236 "traddr": "10.0.0.3", 00:13:33.236 "trsvcid": "4420" 00:13:33.236 }, 00:13:33.236 "peer_address": { 00:13:33.236 "trtype": "TCP", 00:13:33.236 "adrfam": "IPv4", 00:13:33.236 "traddr": "10.0.0.1", 00:13:33.236 "trsvcid": "42116" 00:13:33.236 }, 00:13:33.236 "auth": { 00:13:33.236 "state": "completed", 00:13:33.236 "digest": "sha512", 00:13:33.236 "dhgroup": "ffdhe6144" 00:13:33.236 } 00:13:33.236 } 00:13:33.236 ]' 00:13:33.236 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.236 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:33.236 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.236 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:33.236 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:13:33.236 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.236 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.236 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.496 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:13:33.496 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:13:34.431 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.431 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:34.431 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.431 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.431 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.431 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.431 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:34.431 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:34.689 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:34.689 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:34.689 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:34.689 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:34.689 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:34.689 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.689 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.689 10:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.689 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.689 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.689 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.689 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.690 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.255 00:13:35.255 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.255 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.255 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.255 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.255 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.255 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.255 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.255 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.255 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.255 { 00:13:35.255 "cntlid": 133, 00:13:35.255 "qid": 0, 00:13:35.255 "state": "enabled", 00:13:35.255 "thread": "nvmf_tgt_poll_group_000", 00:13:35.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:35.255 "listen_address": { 00:13:35.255 "trtype": "TCP", 00:13:35.255 "adrfam": "IPv4", 00:13:35.255 "traddr": "10.0.0.3", 00:13:35.255 "trsvcid": "4420" 00:13:35.255 }, 00:13:35.255 "peer_address": { 00:13:35.255 "trtype": "TCP", 00:13:35.255 "adrfam": "IPv4", 00:13:35.255 "traddr": "10.0.0.1", 00:13:35.255 "trsvcid": "42160" 00:13:35.255 }, 00:13:35.255 "auth": { 00:13:35.255 "state": "completed", 00:13:35.255 "digest": "sha512", 00:13:35.255 "dhgroup": "ffdhe6144" 00:13:35.255 } 00:13:35.255 } 00:13:35.255 ]' 00:13:35.255 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.513 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:35.513 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.513 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:35.513 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.513 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.513 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.513 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.770 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:13:35.770 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:13:36.335 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.335 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:36.335 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.335 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.335 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.335 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.335 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:36.335 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:36.593 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:13:36.593 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.593 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:36.593 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:36.593 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:36.593 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.593 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
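
The @68 frame shows how the script makes the controller key optional: ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) relies on bash's ${var:+word} expansion, which yields the alternate words only when the variable is set and non-empty. Keys 0-2 have controller secrets, so the subsystem gets both --dhchap-key and --dhchap-ctrlr-key (bidirectional authentication); ckeys[3] is empty, so the array stays empty and key3 is registered for unidirectional, host-only auth — as in the @70 call that follows. A standalone illustration of the idiom (adapted to a loop variable; the script itself indexes by its third positional argument):

  # Empty array if no controller secret exists for this key ID.
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key "key$keyid" "${ckey[@]}"
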
nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key3 00:13:36.593 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.593 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.593 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.593 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:36.593 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:36.593 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:37.160 00:13:37.160 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.160 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.160 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.418 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.418 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.418 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.418 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.418 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.418 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:37.418 { 00:13:37.418 "cntlid": 135, 00:13:37.418 "qid": 0, 00:13:37.418 "state": "enabled", 00:13:37.418 "thread": "nvmf_tgt_poll_group_000", 00:13:37.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:37.418 "listen_address": { 00:13:37.418 "trtype": "TCP", 00:13:37.418 "adrfam": "IPv4", 00:13:37.418 "traddr": "10.0.0.3", 00:13:37.418 "trsvcid": "4420" 00:13:37.418 }, 00:13:37.418 "peer_address": { 00:13:37.418 "trtype": "TCP", 00:13:37.418 "adrfam": "IPv4", 00:13:37.418 "traddr": "10.0.0.1", 00:13:37.418 "trsvcid": "42194" 00:13:37.418 }, 00:13:37.418 "auth": { 00:13:37.418 "state": "completed", 00:13:37.418 "digest": "sha512", 00:13:37.418 "dhgroup": "ffdhe6144" 00:13:37.418 } 00:13:37.418 } 00:13:37.418 ]' 00:13:37.418 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.675 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:37.675 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.675 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:37.675 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.675 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.675 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.675 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.932 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:13:37.932 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:13:38.496 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.497 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:38.497 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.497 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.497 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.497 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:38.497 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.497 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:38.497 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:38.755 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:13:38.755 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:38.755 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:38.755 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:38.755 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:39.017 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.017 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.017 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.017 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.017 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.017 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.017 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.017 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.585 00:13:39.585 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.585 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.585 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.845 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.845 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.845 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.845 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.845 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.845 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.845 { 00:13:39.845 "cntlid": 137, 00:13:39.845 "qid": 0, 00:13:39.845 "state": "enabled", 00:13:39.845 "thread": "nvmf_tgt_poll_group_000", 00:13:39.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:39.845 "listen_address": { 00:13:39.845 "trtype": "TCP", 00:13:39.845 "adrfam": "IPv4", 00:13:39.845 "traddr": "10.0.0.3", 00:13:39.845 "trsvcid": "4420" 00:13:39.845 }, 00:13:39.845 "peer_address": { 00:13:39.845 "trtype": "TCP", 00:13:39.845 "adrfam": "IPv4", 00:13:39.845 "traddr": "10.0.0.1", 00:13:39.845 "trsvcid": "42222" 00:13:39.845 }, 00:13:39.845 "auth": { 00:13:39.845 "state": "completed", 00:13:39.845 "digest": "sha512", 00:13:39.845 "dhgroup": "ffdhe8192" 00:13:39.845 } 00:13:39.845 } 00:13:39.845 ]' 00:13:39.845 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.845 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:39.845 10:37:05 
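
The host-side attach (@60/@31 frames) is where authentication is actually exercised: bdev_nvme_attach_controller dials the target and, because --dhchap-key is present, completes DH-HMAC-CHAP before exposing the controller as a bdev. Annotated for reference — the key names refer to keys registered with the host application earlier in the run, which is not shown in this excerpt:

  # -t/-f: transport and address family; -a/-s: target address and service ID
  # (4420 is the NVMe/TCP default); -q: host NQN offered during connect;
  # -n: subsystem NQN; -b: name of the resulting controller.
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
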
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.845 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:39.845 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:40.103 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.103 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.103 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.361 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:13:40.362 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:13:40.928 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.928 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:40.928 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.928 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.928 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.928 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:40.928 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:40.928 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:41.186 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:13:41.186 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.186 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:41.186 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:41.186 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:41.186 10:37:06 
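
After the SPDK-host round-trip, each configuration is re-checked through the kernel initiator with nvme-cli (@80/@36 frames). Two flags are worth glossing: -i 1 (--nr-io-queues) keeps the association to a single I/O queue, and -l 0 (--ctrl-loss-tmo) makes a failed association give up immediately instead of retrying, so a bad handshake fails the test promptly. A condensed sketch of the pair of calls, secrets elided:

  # Connect through the kernel initiator with DH-HMAC-CHAP, then tear down.
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
      -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
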
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.186 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.186 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.186 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.186 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.186 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.186 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.186 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.119 00:13:42.119 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:42.119 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:42.119 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.377 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.377 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.377 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.377 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.377 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.377 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:42.377 { 00:13:42.377 "cntlid": 139, 00:13:42.377 "qid": 0, 00:13:42.377 "state": "enabled", 00:13:42.377 "thread": "nvmf_tgt_poll_group_000", 00:13:42.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:42.377 "listen_address": { 00:13:42.377 "trtype": "TCP", 00:13:42.377 "adrfam": "IPv4", 00:13:42.377 "traddr": "10.0.0.3", 00:13:42.377 "trsvcid": "4420" 00:13:42.377 }, 00:13:42.377 "peer_address": { 00:13:42.377 "trtype": "TCP", 00:13:42.377 "adrfam": "IPv4", 00:13:42.377 "traddr": "10.0.0.1", 00:13:42.377 "trsvcid": "44738" 00:13:42.377 }, 00:13:42.377 "auth": { 00:13:42.377 "state": "completed", 00:13:42.377 "digest": "sha512", 00:13:42.377 "dhgroup": "ffdhe8192" 00:13:42.377 } 00:13:42.377 } 00:13:42.377 ]' 00:13:42.377 10:37:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:42.377 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:42.377 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:42.377 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:42.377 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:42.377 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.377 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.377 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.635 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:13:42.635 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: --dhchap-ctrl-secret DHHC-1:02:ZWQ3OWVjZWNmNTI2NjYxMTAwMjBiZDkxODI2MzRkNGU4MWJhODhiZWRjN2RlZmZmtbP5hw==: 00:13:43.568 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.568 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:43.568 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.568 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.568 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.568 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:43.568 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:43.568 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:43.826 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:13:43.826 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:43.826 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:43.826 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:13:43.826 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:43.826 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.826 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.826 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.826 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.826 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.826 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.826 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.826 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.392 00:13:44.392 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:44.392 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.392 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:44.648 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.648 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.648 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.648 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.648 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.648 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:44.648 { 00:13:44.648 "cntlid": 141, 00:13:44.648 "qid": 0, 00:13:44.648 "state": "enabled", 00:13:44.648 "thread": "nvmf_tgt_poll_group_000", 00:13:44.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:44.648 "listen_address": { 00:13:44.648 "trtype": "TCP", 00:13:44.648 "adrfam": "IPv4", 00:13:44.648 "traddr": "10.0.0.3", 00:13:44.648 "trsvcid": "4420" 00:13:44.648 }, 00:13:44.648 "peer_address": { 00:13:44.648 "trtype": "TCP", 00:13:44.648 "adrfam": "IPv4", 00:13:44.648 "traddr": "10.0.0.1", 00:13:44.648 "trsvcid": "44778" 00:13:44.648 }, 00:13:44.648 "auth": { 00:13:44.648 "state": "completed", 00:13:44.648 "digest": 
"sha512", 00:13:44.648 "dhgroup": "ffdhe8192" 00:13:44.648 } 00:13:44.648 } 00:13:44.648 ]' 00:13:44.648 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.904 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:44.904 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.904 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:44.904 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:44.904 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.904 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.904 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.161 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:13:45.161 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:01:YTk1ZGJlZTNjMGI0YzAwMWI0YjFmZGM1YTE0NzJiMjjdOHir: 00:13:46.093 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.093 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:46.093 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.093 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.093 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.093 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:46.093 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:46.093 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:46.093 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:13:46.093 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:46.093 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:13:46.093 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:46.093 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:46.093 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.093 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key3 00:13:46.093 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.093 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.093 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.093 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:46.093 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:46.094 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:47.027 00:13:47.027 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:47.027 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:47.027 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.027 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.027 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.027 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.027 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.027 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.027 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:47.027 { 00:13:47.027 "cntlid": 143, 00:13:47.027 "qid": 0, 00:13:47.027 "state": "enabled", 00:13:47.027 "thread": "nvmf_tgt_poll_group_000", 00:13:47.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:47.027 "listen_address": { 00:13:47.027 "trtype": "TCP", 00:13:47.027 "adrfam": "IPv4", 00:13:47.027 "traddr": "10.0.0.3", 00:13:47.027 "trsvcid": "4420" 00:13:47.027 }, 00:13:47.027 "peer_address": { 00:13:47.027 "trtype": "TCP", 00:13:47.027 "adrfam": "IPv4", 00:13:47.027 "traddr": "10.0.0.1", 00:13:47.027 "trsvcid": "44798" 00:13:47.027 }, 00:13:47.027 "auth": { 00:13:47.027 "state": "completed", 00:13:47.027 
"digest": "sha512", 00:13:47.027 "dhgroup": "ffdhe8192" 00:13:47.027 } 00:13:47.027 } 00:13:47.027 ]' 00:13:47.027 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:47.285 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:47.285 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:47.285 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:47.285 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:47.285 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.285 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.285 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.542 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:13:47.542 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:13:48.480 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.480 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:48.480 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.480 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.480 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.480 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:48.480 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:13:48.480 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:48.480 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:48.480 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:48.480 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:48.738 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:13:48.738 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:48.738 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:48.738 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:48.738 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:48.738 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.738 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.738 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.738 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.738 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.738 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.738 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.738 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:49.303 00:13:49.303 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.303 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:49.303 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.573 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.573 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.573 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.573 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.848 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.848 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:49.848 { 00:13:49.848 "cntlid": 145, 00:13:49.848 "qid": 0, 00:13:49.848 "state": "enabled", 00:13:49.848 "thread": "nvmf_tgt_poll_group_000", 00:13:49.848 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:49.848 "listen_address": { 00:13:49.848 "trtype": "TCP", 00:13:49.848 "adrfam": "IPv4", 00:13:49.848 "traddr": "10.0.0.3", 00:13:49.848 "trsvcid": "4420" 00:13:49.848 }, 00:13:49.848 "peer_address": { 00:13:49.848 "trtype": "TCP", 00:13:49.848 "adrfam": "IPv4", 00:13:49.848 "traddr": "10.0.0.1", 00:13:49.848 "trsvcid": "44820" 00:13:49.848 }, 00:13:49.848 "auth": { 00:13:49.848 "state": "completed", 00:13:49.848 "digest": "sha512", 00:13:49.848 "dhgroup": "ffdhe8192" 00:13:49.848 } 00:13:49.848 } 00:13:49.848 ]' 00:13:49.848 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:49.848 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:49.848 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:49.848 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:49.848 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:49.849 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.849 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.849 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.107 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:13:50.107 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:00:MDk2ZjExY2U2YjZjMDI1ZDU0ODFiZDU0ZDZkOTgwMTg4MmI4MjU0Y2YwMDUwMjBmOUtWqw==: --dhchap-ctrl-secret DHHC-1:03:NzIxNTBkM2RkOTE5N2YyNDU2ZmM2ZjUwYjE2NmMxMTdhZWQzODdhNDEyNzJhOTM2OTVjNjU4MDhiMWUyYzFiYxRhOcU=: 00:13:51.041 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.041 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:51.041 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.041 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.041 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.041 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 00:13:51.041 10:37:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.041 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.041 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.041 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:13:51.041 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:51.041 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:13:51.041 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:51.041 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:51.041 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:51.041 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:51.041 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:13:51.041 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:51.042 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:51.607 request: 00:13:51.607 { 00:13:51.607 "name": "nvme0", 00:13:51.607 "trtype": "tcp", 00:13:51.607 "traddr": "10.0.0.3", 00:13:51.607 "adrfam": "ipv4", 00:13:51.607 "trsvcid": "4420", 00:13:51.607 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:51.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:51.607 "prchk_reftag": false, 00:13:51.607 "prchk_guard": false, 00:13:51.607 "hdgst": false, 00:13:51.607 "ddgst": false, 00:13:51.607 "dhchap_key": "key2", 00:13:51.607 "allow_unrecognized_csi": false, 00:13:51.607 "method": "bdev_nvme_attach_controller", 00:13:51.607 "req_id": 1 00:13:51.607 } 00:13:51.607 Got JSON-RPC error response 00:13:51.607 response: 00:13:51.607 { 00:13:51.607 "code": -5, 00:13:51.607 "message": "Input/output error" 00:13:51.607 } 00:13:51.607 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:51.607 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:51.607 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:51.607 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:51.607 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:51.607 
10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.607 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.607 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.607 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.607 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.607 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.607 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.607 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:51.607 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:51.607 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:51.607 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:51.607 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:51.607 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:51.607 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:51.607 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:51.607 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:51.607 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:52.173 request: 00:13:52.173 { 00:13:52.173 "name": "nvme0", 00:13:52.173 "trtype": "tcp", 00:13:52.173 "traddr": "10.0.0.3", 00:13:52.173 "adrfam": "ipv4", 00:13:52.173 "trsvcid": "4420", 00:13:52.173 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:52.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:52.173 "prchk_reftag": false, 00:13:52.173 "prchk_guard": false, 00:13:52.173 "hdgst": false, 00:13:52.173 "ddgst": false, 00:13:52.173 "dhchap_key": "key1", 00:13:52.173 "dhchap_ctrlr_key": "ckey2", 00:13:52.173 "allow_unrecognized_csi": false, 00:13:52.173 "method": "bdev_nvme_attach_controller", 00:13:52.173 "req_id": 1 00:13:52.173 } 00:13:52.173 Got JSON-RPC error response 00:13:52.173 response: 00:13:52.173 { 
00:13:52.173 "code": -5, 00:13:52.173 "message": "Input/output error" 00:13:52.173 } 00:13:52.173 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:52.173 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:52.173 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:52.173 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:52.173 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:52.173 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.173 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.173 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.173 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 00:13:52.173 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.173 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.173 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.173 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.173 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:52.173 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.173 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:52.173 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:52.173 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:52.173 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:52.173 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.173 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.173 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.740 
request: 00:13:52.740 { 00:13:52.740 "name": "nvme0", 00:13:52.740 "trtype": "tcp", 00:13:52.740 "traddr": "10.0.0.3", 00:13:52.740 "adrfam": "ipv4", 00:13:52.740 "trsvcid": "4420", 00:13:52.740 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:52.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:52.740 "prchk_reftag": false, 00:13:52.740 "prchk_guard": false, 00:13:52.740 "hdgst": false, 00:13:52.740 "ddgst": false, 00:13:52.740 "dhchap_key": "key1", 00:13:52.740 "dhchap_ctrlr_key": "ckey1", 00:13:52.740 "allow_unrecognized_csi": false, 00:13:52.740 "method": "bdev_nvme_attach_controller", 00:13:52.740 "req_id": 1 00:13:52.740 } 00:13:52.740 Got JSON-RPC error response 00:13:52.740 response: 00:13:52.740 { 00:13:52.740 "code": -5, 00:13:52.740 "message": "Input/output error" 00:13:52.740 } 00:13:52.740 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:52.740 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:52.740 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:52.740 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:52.740 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:52.740 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.740 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.740 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.740 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67331 00:13:52.740 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 67331 ']' 00:13:52.740 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 67331 00:13:52.740 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:13:52.740 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:52.740 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67331 00:13:52.740 killing process with pid 67331 00:13:52.740 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:52.740 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:52.740 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67331' 00:13:52.741 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 67331 00:13:52.741 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 67331 00:13:52.999 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:52.999 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:52.999 10:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:52.999 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.999 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70491 00:13:52.999 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70491 00:13:52.999 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:52.999 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 70491 ']' 00:13:52.999 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.999 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:52.999 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.999 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:52.999 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.257 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:53.257 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:13:53.257 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:53.257 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:53.257 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.515 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.515 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:53.515 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70491 00:13:53.515 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 70491 ']' 00:13:53.515 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.515 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:53.515 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
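
At this point auth.sh has killed the first nvmf_tgt (pid 67331) and restarted it as pid 70491 with --wait-for-rpc and -L nvmf_auth, so the DHCHAP keys can be registered through the keyring before the subsystem accepts connections. A minimal sketch of that configuration flow, under stated assumptions: the keyring_file_add_key calls and file paths are taken from the trace that follows, while generating the DHHC-1 secrets with nvme-cli's gen-dhchap-key is an assumption (any valid "DHHC-1:xx:...:" string in the key file would do), and the trace's own rpc_cmd batch performs the equivalent of the final startup step:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Generate a DHHC-1 secret (nvme-cli; assumed available on this host).
    nvme gen-dhchap-key --hmac=2 --key-length=48 > /tmp/spdk.key-sha384.B2q
    # Register key files with the keyring: keyN authenticates the host,
    # ckeyN (when present) makes the authentication bidirectional.
    $rpc keyring_file_add_key key0  /tmp/spdk.key-null.V9i
    $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6DO
    $rpc keyring_file_add_key key1  /tmp/spdk.key-sha256.o62
    $rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.B2q
    # Only once the keys exist does the app leave --wait-for-rpc mode.
    $rpc framework_start_init
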
00:13:53.515 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:53.515 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.774 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:53.774 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:13:53.774 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:53.774 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.774 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.774 null0 00:13:53.774 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.774 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:53.774 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.V9i 00:13:53.774 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.774 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.774 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.774 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.6DO ]] 00:13:53.774 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6DO 00:13:53.774 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.774 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.o62 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.B2q ]] 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.B2q 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:54.033 10:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.R5t 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.AWI ]] 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.AWI 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.zqo 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key3 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
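
The connect_authenticate sha512 ffdhe8192 3 call entered at target/auth.sh@179 boils down to one target-side grant plus one host-side attach, which the trace executes next. A condensed sketch of that pair, with the NQNs and addresses taken verbatim from the trace; the host-side RPC goes to the separate /var/tmp/host.sock instance, whose allowed digests and dhgroups were set by the earlier bdev_nvme_set_options call:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f
    # Target side: allow the host on the subsystem, requiring DHCHAP with key3.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3
    # Host side: attach a controller, presenting the matching key. Success is
    # then checked by reading the negotiated digest/dhgroup/state back out of
    # nvmf_subsystem_get_qpairs, as the trace does below.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3
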
00:13:54.033 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:54.968 nvme0n1 00:13:54.968 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:54.968 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:54.968 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.226 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.226 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.226 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.226 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.226 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.226 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:55.226 { 00:13:55.226 "cntlid": 1, 00:13:55.226 "qid": 0, 00:13:55.226 "state": "enabled", 00:13:55.226 "thread": "nvmf_tgt_poll_group_000", 00:13:55.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:55.226 "listen_address": { 00:13:55.226 "trtype": "TCP", 00:13:55.226 "adrfam": "IPv4", 00:13:55.226 "traddr": "10.0.0.3", 00:13:55.226 "trsvcid": "4420" 00:13:55.226 }, 00:13:55.226 "peer_address": { 00:13:55.226 "trtype": "TCP", 00:13:55.226 "adrfam": "IPv4", 00:13:55.226 "traddr": "10.0.0.1", 00:13:55.226 "trsvcid": "58016" 00:13:55.226 }, 00:13:55.226 "auth": { 00:13:55.226 "state": "completed", 00:13:55.226 "digest": "sha512", 00:13:55.226 "dhgroup": "ffdhe8192" 00:13:55.226 } 00:13:55.226 } 00:13:55.226 ]' 00:13:55.226 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:55.484 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:55.484 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:55.484 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:55.484 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.484 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.484 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.484 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.741 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:13:55.741 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:13:56.676 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.676 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:56.676 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.676 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.676 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.676 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key3 00:13:56.676 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.676 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.676 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.676 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:56.676 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:56.934 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:56.935 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:56.935 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:56.935 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:56.935 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:56.935 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:56.935 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:56.935 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:56.935 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:56.935 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:57.193 request: 00:13:57.193 { 00:13:57.193 "name": "nvme0", 00:13:57.193 "trtype": "tcp", 00:13:57.193 "traddr": "10.0.0.3", 00:13:57.193 "adrfam": "ipv4", 00:13:57.193 "trsvcid": "4420", 00:13:57.193 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:57.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:57.193 "prchk_reftag": false, 00:13:57.193 "prchk_guard": false, 00:13:57.193 "hdgst": false, 00:13:57.193 "ddgst": false, 00:13:57.193 "dhchap_key": "key3", 00:13:57.193 "allow_unrecognized_csi": false, 00:13:57.193 "method": "bdev_nvme_attach_controller", 00:13:57.193 "req_id": 1 00:13:57.193 } 00:13:57.193 Got JSON-RPC error response 00:13:57.193 response: 00:13:57.193 { 00:13:57.193 "code": -5, 00:13:57.193 "message": "Input/output error" 00:13:57.193 } 00:13:57.193 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:57.193 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:57.193 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:57.193 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:57.193 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:57.193 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:57.193 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:57.193 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:57.451 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:57.451 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:57.451 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:57.451 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:57.451 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.451 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:57.451 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.451 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:57.451 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:57.451 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:57.709 request: 00:13:57.709 { 00:13:57.709 "name": "nvme0", 00:13:57.709 "trtype": "tcp", 00:13:57.709 "traddr": "10.0.0.3", 00:13:57.709 "adrfam": "ipv4", 00:13:57.709 "trsvcid": "4420", 00:13:57.709 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:57.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:57.709 "prchk_reftag": false, 00:13:57.709 "prchk_guard": false, 00:13:57.709 "hdgst": false, 00:13:57.709 "ddgst": false, 00:13:57.709 "dhchap_key": "key3", 00:13:57.709 "allow_unrecognized_csi": false, 00:13:57.709 "method": "bdev_nvme_attach_controller", 00:13:57.709 "req_id": 1 00:13:57.709 } 00:13:57.709 Got JSON-RPC error response 00:13:57.709 response: 00:13:57.709 { 00:13:57.709 "code": -5, 00:13:57.709 "message": "Input/output error" 00:13:57.709 } 00:13:57.709 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:57.709 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:57.709 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:57.709 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:57.709 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:57.709 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:57.709 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:57.709 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:57.709 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:57.710 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:58.276 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:58.276 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.276 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.276 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.276 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:13:58.276 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.276 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.276 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.276 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:58.276 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:58.276 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:58.276 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:58.276 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:58.276 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:58.276 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:58.276 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:58.276 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:58.276 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:58.535 request: 00:13:58.535 { 00:13:58.535 "name": "nvme0", 00:13:58.535 "trtype": "tcp", 00:13:58.535 "traddr": "10.0.0.3", 00:13:58.535 "adrfam": "ipv4", 00:13:58.535 "trsvcid": "4420", 00:13:58.535 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:58.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:13:58.535 "prchk_reftag": false, 00:13:58.535 "prchk_guard": false, 00:13:58.535 "hdgst": false, 00:13:58.535 "ddgst": false, 00:13:58.535 "dhchap_key": "key0", 00:13:58.535 "dhchap_ctrlr_key": "key1", 00:13:58.535 "allow_unrecognized_csi": false, 00:13:58.535 "method": "bdev_nvme_attach_controller", 00:13:58.535 "req_id": 1 00:13:58.535 } 00:13:58.535 Got JSON-RPC error response 00:13:58.535 response: 00:13:58.535 { 00:13:58.535 "code": -5, 00:13:58.535 "message": "Input/output error" 00:13:58.535 } 00:13:58.793 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:58.793 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:58.793 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:58.793 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:13:58.793 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:58.793 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:58.793 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:59.051 nvme0n1 00:13:59.051 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:59.051 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:13:59.051 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.313 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.313 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.313 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.897 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 00:13:59.897 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.897 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.897 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.897 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:59.897 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:59.897 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:00.833 nvme0n1 00:14:00.833 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:14:00.833 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:14:00.833 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.091 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.091 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:01.091 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.091 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.091 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.091 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:14:01.091 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.091 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:14:01.349 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.349 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:14:01.349 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid 50e4d619-cecf-4dd2-989d-1336dee31d8f -l 0 --dhchap-secret DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: --dhchap-ctrl-secret DHHC-1:03:NmE5NDE1N2ZjNjNmOWJkYjU0MzZkYWQzNDM1YTgzYjAzYjZlYWQ5ZDI0YTBlZmQ5MThlNTQ0MTIwY2QwOGNjM1BnHgQ=: 00:14:02.282 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:14:02.282 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:14:02.282 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:14:02.282 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:14:02.282 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:14:02.282 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:14:02.282 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:14:02.282 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.282 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.282 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:14:02.282 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:02.282 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:14:02.282 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:14:02.282 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.282 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:14:02.282 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.282 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:14:02.282 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:02.282 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:02.849 request: 00:14:02.849 { 00:14:02.849 "name": "nvme0", 00:14:02.849 "trtype": "tcp", 00:14:02.849 "traddr": "10.0.0.3", 00:14:02.849 "adrfam": "ipv4", 00:14:02.849 "trsvcid": "4420", 00:14:02.849 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:02.849 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f", 00:14:02.849 "prchk_reftag": false, 00:14:02.849 "prchk_guard": false, 00:14:02.849 "hdgst": false, 00:14:02.849 "ddgst": false, 00:14:02.850 "dhchap_key": "key1", 00:14:02.850 "allow_unrecognized_csi": false, 00:14:02.850 "method": "bdev_nvme_attach_controller", 00:14:02.850 "req_id": 1 00:14:02.850 } 00:14:02.850 Got JSON-RPC error response 00:14:02.850 response: 00:14:02.850 { 00:14:02.850 "code": -5, 00:14:02.850 "message": "Input/output error" 00:14:02.850 } 00:14:02.850 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:02.850 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:02.850 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:02.850 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:02.850 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:02.850 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:02.850 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:04.223 nvme0n1 00:14:04.223 
10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:14:04.223 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.223 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:14:04.223 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.223 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.223 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.481 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:14:04.481 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.481 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.481 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.481 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:14:04.481 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:04.481 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:05.045 nvme0n1 00:14:05.045 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:14:05.045 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:14:05.045 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.302 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.302 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.302 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.559 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:05.560 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.560 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.560 10:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.560 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: '' 2s 00:14:05.560 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:05.560 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:05.560 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: 00:14:05.560 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:14:05.560 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:05.560 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:05.560 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: ]] 00:14:05.560 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YmQ3MmE4ZWNkOWRjZGQ2Y2QzNzBkNjI0MzljZWVlOTb+U5Tb: 00:14:05.560 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:14:05.560 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:05.560 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:07.457 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:14:07.457 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:14:07.457 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:14:07.457 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:14:07.715 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:14:07.715 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:14:07.715 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:14:07.715 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key1 --dhchap-ctrlr-key key2 00:14:07.715 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.715 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.715 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.715 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: 2s 00:14:07.715 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:07.715 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:07.715 10:37:32 
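
nvme_set_keys here rotates the secret on the live kernel controller rather than through SPDK: it writes the DHHC-1 blob into the controller's nvme-fabrics sysfs node and then waits out the 2s grace period before touching the block device again. A minimal sketch of both directions, assuming the kernel's dhchap_secret/dhchap_ctrl_secret attribute names (bash xtrace does not echo redirections, so the actual targets of the echo lines above are not visible in this log):

    ctl=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
    # Host-side secret; writing it triggers re-authentication
    # (attribute names assumed, see note above).
    echo 'DHHC-1:01:...:' > "$ctl/dhchap_secret"
    # Bidirectional case: also replace the controller-side secret.
    echo 'DHHC-1:03:...:' > "$ctl/dhchap_ctrl_secret"
    sleep 2   # the 2s timeout passed to nvme_set_keys in the trace

waitforblk then simply re-runs lsblk -l -o NAME | grep -q -w nvme0n1 until the namespace is usable again, which is the pair of lsblk/grep records visible around this point in the trace.
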
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:14:07.715 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: 00:14:07.715 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:07.715 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:07.715 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:14:07.715 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: ]] 00:14:07.715 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NTFlODdiODUyMGE5MjEwNmI3YmMyMDlhZGVhZmY2MDZmNzg4NjJmODBjMWU5OWEwyS0fWw==: 00:14:07.715 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:07.715 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:09.614 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:14:09.614 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:14:09.614 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:14:09.614 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:14:09.614 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:14:09.614 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:14:09.614 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:14:09.614 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.614 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:09.614 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.614 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.614 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.614 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:09.614 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:09.614 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:10.548 nvme0n1 00:14:10.806 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:10.806 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.806 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.806 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.806 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:10.806 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:11.381 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:14:11.381 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:14:11.381 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.639 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.639 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:14:11.639 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.639 10:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.639 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.639 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:14:11.639 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:14:11.897 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:14:11.897 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.897 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:14:12.154 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.154 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:12.154 10:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.154 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.154 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.154 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:12.154 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:12.154 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:12.154 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:12.154 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:12.154 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:12.154 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:12.154 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:12.154 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:13.087 request: 00:14:13.087 { 00:14:13.087 "name": "nvme0", 00:14:13.087 "dhchap_key": "key1", 00:14:13.087 "dhchap_ctrlr_key": "key3", 00:14:13.087 "method": "bdev_nvme_set_keys", 00:14:13.087 "req_id": 1 00:14:13.087 } 00:14:13.088 Got JSON-RPC error response 00:14:13.088 response: 00:14:13.088 { 00:14:13.088 "code": -13, 00:14:13.088 "message": "Permission denied" 00:14:13.088 } 00:14:13.088 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:13.088 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:13.088 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:13.088 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:13.088 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:13.088 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:13.088 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.088 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:14:13.088 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:14:14.460 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:14.460 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:14.460 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.460 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:14:14.460 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:14.460 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.460 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.460 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.460 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:14.460 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:14.460 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:15.440 nvme0n1 00:14:15.697 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:15.697 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.697 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.697 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.697 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:15.697 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:15.697 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:15.697 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:15.697 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.697 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:15.697 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.697 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:15.697 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:16.263 request: 00:14:16.263 { 00:14:16.263 "name": "nvme0", 00:14:16.263 "dhchap_key": "key2", 00:14:16.263 "dhchap_ctrlr_key": "key0", 00:14:16.263 "method": "bdev_nvme_set_keys", 00:14:16.263 "req_id": 1 00:14:16.263 } 00:14:16.263 Got JSON-RPC error response 00:14:16.263 response: 00:14:16.263 { 00:14:16.263 "code": -13, 00:14:16.263 "message": "Permission denied" 00:14:16.263 } 00:14:16.263 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:16.263 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:16.263 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:16.263 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:16.263 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:16.263 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.263 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:16.521 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:14:16.521 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:14:17.550 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:17.550 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:17.550 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.808 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:14:17.808 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:14:17.808 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:14:17.808 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67350 00:14:17.808 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 67350 ']' 00:14:17.808 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 67350 00:14:17.808 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:14:17.808 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:17.808 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67350 00:14:17.808 killing process with pid 67350 00:14:17.808 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:17.808 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:17.808 10:37:43 
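
Both NOT cases here (steps 261 and 271) assert the same rule from opposite directions: bdev_nvme_set_keys may only switch the host to a key pair the target has already been told about via nvmf_subsystem_set_keys, and anything else is refused with JSON-RPC error -13 (Permission denied). Afterwards the harness waits for the deliberately broken session to be reaped by the 1-second ctrlr-loss timeout, re-checking the controller count until it reaches zero. Rendered as an explicit loop (the trace above inlines the same check/sleep/check sequence):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Poll until the auth-failed controller has been torn down; the trace
    # shows one check that sees 1, a sleep 1s, and a final check that sees 0.
    while (( $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1
    done
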
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67350' 00:14:17.808 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 67350 00:14:17.808 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 67350 00:14:18.375 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:18.375 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:18.375 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:14:18.375 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:18.375 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:14:18.375 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:18.375 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:18.375 rmmod nvme_tcp 00:14:18.375 rmmod nvme_fabrics 00:14:18.375 rmmod nvme_keyring 00:14:18.375 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:18.375 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:14:18.375 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:14:18.375 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70491 ']' 00:14:18.375 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70491 00:14:18.375 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 70491 ']' 00:14:18.375 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 70491 00:14:18.375 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:14:18.375 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:18.375 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70491 00:14:18.375 killing process with pid 70491 00:14:18.375 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:18.375 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:18.375 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70491' 00:14:18.375 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 70491 00:14:18.375 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 70491 00:14:18.632 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:18.632 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:18.632 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:18.632 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:14:18.632 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
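
cleanup then unwinds everything in reverse: kill the host-side daemon (pid 67350), unload nvme_tcp/nvme_fabrics/nvme_keyring, kill the target (pid 70491), and finally scrub the firewall. The iptr helper whose trace begins above keeps that last step surgical; because every rule the harness added was tagged with an SPDK_NVMF comment, the restore can drop exactly those rules and nothing else:

    # Re-load the ruleset minus every rule the harness tagged.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
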
00:14:18.632 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:14:18.632 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:18.632 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:18.632 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:18.632 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:18.632 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:18.632 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:18.632 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:18.632 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:18.632 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:18.632 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:18.632 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:18.632 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:18.890 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:18.890 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:18.890 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:18.890 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:18.890 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:18.890 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.890 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:18.890 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.890 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:14:18.890 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.V9i /tmp/spdk.key-sha256.o62 /tmp/spdk.key-sha384.R5t /tmp/spdk.key-sha512.zqo /tmp/spdk.key-sha512.6DO /tmp/spdk.key-sha384.B2q /tmp/spdk.key-sha256.AWI '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:18.890 00:14:18.890 real 3m20.724s 00:14:18.890 user 8m2.799s 00:14:18.890 sys 0m31.233s 00:14:18.890 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:18.890 ************************************ 00:14:18.890 END TEST nvmf_auth_target 00:14:18.890 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:18.890 ************************************ 00:14:18.890 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:18.890 10:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:18.890 10:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:18.890 10:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:18.890 10:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:18.890 ************************************ 00:14:18.890 START TEST nvmf_bdevio_no_huge 00:14:18.890 ************************************ 00:14:18.890 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:19.148 * Looking for test storage... 00:14:19.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:19.148 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:19.148 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:14:19.148 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:19.148 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:19.148 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:19.148 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:19.148 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:19.148 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:14:19.148 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:14:19.148 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:14:19.148 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:14:19.148 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:14:19.148 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:14:19.148 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:14:19.148 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:19.148 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:19.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.149 --rc genhtml_branch_coverage=1 00:14:19.149 --rc genhtml_function_coverage=1 00:14:19.149 --rc genhtml_legend=1 00:14:19.149 --rc geninfo_all_blocks=1 00:14:19.149 --rc geninfo_unexecuted_blocks=1 00:14:19.149 00:14:19.149 ' 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:19.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.149 --rc genhtml_branch_coverage=1 00:14:19.149 --rc genhtml_function_coverage=1 00:14:19.149 --rc genhtml_legend=1 00:14:19.149 --rc geninfo_all_blocks=1 00:14:19.149 --rc geninfo_unexecuted_blocks=1 00:14:19.149 00:14:19.149 ' 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:19.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.149 --rc genhtml_branch_coverage=1 00:14:19.149 --rc genhtml_function_coverage=1 00:14:19.149 --rc genhtml_legend=1 00:14:19.149 --rc geninfo_all_blocks=1 00:14:19.149 --rc geninfo_unexecuted_blocks=1 00:14:19.149 00:14:19.149 ' 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:19.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.149 --rc genhtml_branch_coverage=1 00:14:19.149 --rc genhtml_function_coverage=1 00:14:19.149 --rc genhtml_legend=1 00:14:19.149 --rc geninfo_all_blocks=1 00:14:19.149 --rc geninfo_unexecuted_blocks=1 00:14:19.149 00:14:19.149 ' 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:19.149 
10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:19.149 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:19.149 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:19.150 
10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:19.150 Cannot find device "nvmf_init_br" 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:19.150 Cannot find device "nvmf_init_br2" 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:19.150 Cannot find device "nvmf_tgt_br" 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:19.150 Cannot find device "nvmf_tgt_br2" 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:19.150 Cannot find device "nvmf_init_br" 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:19.150 Cannot find device "nvmf_init_br2" 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:19.150 Cannot find device "nvmf_tgt_br" 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:14:19.150 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:19.408 Cannot find device "nvmf_tgt_br2" 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:19.408 Cannot find device "nvmf_br" 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:19.408 Cannot find device "nvmf_init_if" 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:19.408 Cannot find device "nvmf_init_if2" 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:14:19.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:19.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:19.408 10:37:44 
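
This is nvmf_veth_init building the NET_TYPE=virt topology: the initiator-side veth ends stay in the root namespace, the target-side ends move into nvmf_tgt_ns_spdk, and (in the records just below) a bridge ties the peer halves together so 10.0.0.1/2 can reach 10.0.0.3/4. Condensed to a single initiator/target pair, the sequence is a sketch like:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk   # target end lives in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br          # bridge the peer ends
    ip link set nvmf_tgt_br master nvmf_br

The companion ipts wrapper, also visible below, opens TCP port 4420 on the initiator interfaces and tags each rule with the SPDK_NVMF comment that the teardown filter relies on; the four pings then verify both directions across the bridge.
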
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:19.408 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:19.408 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:14:19.408 00:14:19.408 --- 10.0.0.3 ping statistics --- 00:14:19.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.408 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:19.408 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:19.408 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:14:19.408 00:14:19.408 --- 10.0.0.4 ping statistics --- 00:14:19.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.408 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:19.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:19.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:14:19.408 00:14:19.408 --- 10.0.0.1 ping statistics --- 00:14:19.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.408 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:19.408 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:19.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:19.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:14:19.666 00:14:19.666 --- 10.0.0.2 ping statistics --- 00:14:19.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.666 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:14:19.666 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.666 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:14:19.667 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:19.667 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.667 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:19.667 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:19.667 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.667 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:19.667 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:19.667 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:19.667 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:19.667 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:19.667 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:19.667 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=71144 00:14:19.667 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:19.667 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 71144 00:14:19.667 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 71144 ']' 00:14:19.667 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.667 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:19.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.667 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.667 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:19.667 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:19.667 [2024-11-15 10:37:45.002792] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
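Taken together, the nvmf_veth_init trace above builds this topology: the initiator ends of the veth pairs stay in the root namespace (nvmf_init_if/if2 at 10.0.0.1-2/24), the target ends move into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if/if2 at 10.0.0.3-4/24), and the leftover *_br peers are enslaved to the nvmf_br bridge so the two sides can reach each other. The ACCEPT rules for the NVMe/TCP port carry an SPDK_NVMF comment so teardown can strip exactly these rules later. A condensed sketch, one pair per side, using only commands that appear in the trace (root required):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge peer
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + bridge peer
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # Tagged so cleanup can later do: iptables-save | grep -v SPDK_NVMF | iptables-restore
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3                                  # root ns -> namespace, across the bridge
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns

The four pings in the trace check all four addresses in both directions before the target is started.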
00:14:19.667 [2024-11-15 10:37:45.002913] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:19.925 [2024-11-15 10:37:45.174552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:19.925 [2024-11-15 10:37:45.262198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.925 [2024-11-15 10:37:45.262498] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.925 [2024-11-15 10:37:45.262758] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.925 [2024-11-15 10:37:45.262910] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.925 [2024-11-15 10:37:45.263152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:19.925 [2024-11-15 10:37:45.264027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:19.925 [2024-11-15 10:37:45.264147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:19.925 [2024-11-15 10:37:45.264403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:19.925 [2024-11-15 10:37:45.264419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:19.925 [2024-11-15 10:37:45.270271] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.183 [2024-11-15 10:37:45.478507] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.183 Malloc0 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.183 10:37:45 
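nvmfappstart runs the target inside the namespace with no hugepages and a 1024 MB memory cap (--no-huge -s 1024), then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that startup dance; the trace only shows the waiting message and the retry counter, so the use of rpc_get_methods as the liveness probe is an assumption:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
  nvmfpid=$!
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Poll the UNIX domain RPC socket until the app is up, bounded like max_retries=100 above
  for ((i = 100; i > 0; i--)); do
      "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.5
  done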
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.183 [2024-11-15 10:37:45.525959] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:20.183 { 00:14:20.183 "params": { 00:14:20.183 "name": "Nvme$subsystem", 00:14:20.183 "trtype": "$TEST_TRANSPORT", 00:14:20.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:20.183 "adrfam": "ipv4", 00:14:20.183 "trsvcid": "$NVMF_PORT", 00:14:20.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:20.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:20.183 "hdgst": ${hdgst:-false}, 00:14:20.183 "ddgst": ${ddgst:-false} 00:14:20.183 }, 00:14:20.183 "method": "bdev_nvme_attach_controller" 00:14:20.183 } 00:14:20.183 EOF 00:14:20.183 )") 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:14:20.183 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:20.183 "params": { 00:14:20.183 "name": "Nvme1", 00:14:20.183 "trtype": "tcp", 00:14:20.183 "traddr": "10.0.0.3", 00:14:20.183 "adrfam": "ipv4", 00:14:20.183 "trsvcid": "4420", 00:14:20.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.183 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:20.183 "hdgst": false, 00:14:20.183 "ddgst": false 00:14:20.183 }, 00:14:20.183 "method": "bdev_nvme_attach_controller" 00:14:20.183 }' 00:14:20.183 [2024-11-15 10:37:45.584989] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:14:20.183 [2024-11-15 10:37:45.585083] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71174 ] 00:14:20.441 [2024-11-15 10:37:45.746192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:20.441 [2024-11-15 10:37:45.827605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.441 [2024-11-15 10:37:45.827745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.441 [2024-11-15 10:37:45.827754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.441 [2024-11-15 10:37:45.842428] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:20.700 I/O targets: 00:14:20.700 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:20.700 00:14:20.700 00:14:20.700 CUnit - A unit testing framework for C - Version 2.1-3 00:14:20.700 http://cunit.sourceforge.net/ 00:14:20.700 00:14:20.700 00:14:20.700 Suite: bdevio tests on: Nvme1n1 00:14:20.700 Test: blockdev write read block ...passed 00:14:20.700 Test: blockdev write zeroes read block ...passed 00:14:20.700 Test: blockdev write zeroes read no split ...passed 00:14:20.700 Test: blockdev write zeroes read split ...passed 00:14:20.700 Test: blockdev write zeroes read split partial ...passed 00:14:20.700 Test: blockdev reset ...[2024-11-15 10:37:46.098617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:20.700 [2024-11-15 10:37:46.098763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b0310 (9): Bad file descriptor 00:14:20.700 [2024-11-15 10:37:46.112007] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
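With the target up, the test provisions it entirely over RPC, has gen_nvmf_target_json emit the bdev_nvme_attach_controller config printed above, and hands that JSON to bdevio on an anonymous fd (the --json /dev/fd/62 in the trace is what a process substitution looks like). Condensed from the rpc_cmd calls traced above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # bdevio runs in the root namespace and reaches the listener across the bridge:
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
      --json <(gen_nvmf_target_json) --no-huge -s 1024

The bdevio suite that follows then exercises the attached Nvme1n1 bdev end to end over NVMe/TCP.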
00:14:20.700 passed 00:14:20.700 Test: blockdev write read 8 blocks ...passed 00:14:20.700 Test: blockdev write read size > 128k ...passed 00:14:20.700 Test: blockdev write read invalid size ...passed 00:14:20.700 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:20.700 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:20.700 Test: blockdev write read max offset ...passed 00:14:20.700 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:20.700 Test: blockdev writev readv 8 blocks ...passed 00:14:20.700 Test: blockdev writev readv 30 x 1block ...passed 00:14:20.700 Test: blockdev writev readv block ...passed 00:14:20.700 Test: blockdev writev readv size > 128k ...passed 00:14:20.700 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:20.700 Test: blockdev comparev and writev ...[2024-11-15 10:37:46.122049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.700 [2024-11-15 10:37:46.122100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:20.700 [2024-11-15 10:37:46.122121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.700 [2024-11-15 10:37:46.122132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:20.700 [2024-11-15 10:37:46.122523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.700 [2024-11-15 10:37:46.122555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:20.700 [2024-11-15 10:37:46.122575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.700 [2024-11-15 10:37:46.122585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:20.700 [2024-11-15 10:37:46.122968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.700 [2024-11-15 10:37:46.122999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:20.700 [2024-11-15 10:37:46.123017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.700 [2024-11-15 10:37:46.123028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:20.700 [2024-11-15 10:37:46.123537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.700 [2024-11-15 10:37:46.123568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:20.700 [2024-11-15 10:37:46.123586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.700 [2024-11-15 10:37:46.123596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:20.700 passed 00:14:20.700 Test: blockdev nvme passthru rw ...passed 00:14:20.700 Test: blockdev nvme passthru vendor specific ...[2024-11-15 10:37:46.124766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:20.700 [2024-11-15 10:37:46.124880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:20.700 [2024-11-15 10:37:46.125171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:20.700 [2024-11-15 10:37:46.125200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:20.700 [2024-11-15 10:37:46.125493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:20.700 [2024-11-15 10:37:46.125537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:20.700 [2024-11-15 10:37:46.125870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:20.700 [2024-11-15 10:37:46.125900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:20.700 passed 00:14:20.700 Test: blockdev nvme admin passthru ...passed 00:14:20.700 Test: blockdev copy ...passed 00:14:20.700 00:14:20.700 Run Summary: Type Total Ran Passed Failed Inactive 00:14:20.700 suites 1 1 n/a 0 0 00:14:20.700 tests 23 23 23 0 0 00:14:20.700 asserts 152 152 152 0 n/a 00:14:20.700 00:14:20.700 Elapsed time = 0.176 seconds 00:14:21.267 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.267 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.267 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:21.267 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.267 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:21.267 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:21.267 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:21.267 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:14:21.267 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:21.267 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:14:21.267 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:21.268 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:21.268 rmmod nvme_tcp 00:14:21.268 rmmod nvme_fabrics 00:14:21.268 rmmod nvme_keyring 00:14:21.268 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:21.268 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:14:21.268 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:14:21.268 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 71144 ']' 00:14:21.268 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 71144 00:14:21.268 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 71144 ']' 00:14:21.268 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 71144 00:14:21.268 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:14:21.268 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:21.268 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71144 00:14:21.268 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:14:21.268 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:14:21.268 killing process with pid 71144 00:14:21.268 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71144' 00:14:21.268 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 71144 00:14:21.268 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 71144 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:21.836 10:37:47 
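The shutdown sequence running here is the mirror image of setup: killprocess sanity-checks that the pid still names an SPDK reactor before killing it, iptr replays iptables-save minus the SPDK_NVMF-tagged rules, and nvmf_veth_fini unwinds the bridge, veth pairs, and namespace (the unwind continues just below). Roughly; the body of _remove_spdk_ns is not shown in the trace, so the final ip netns delete is the assumed effect:

  pname=$(ps --no-headers -o comm= "$nvmfpid")
  [[ $pname != sudo ]] && kill "$nvmfpid"        # guard against pid reuse by a sudo process
  wait "$nvmfpid"
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rules
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns delete nvmf_tgt_ns_spdk               # assumed body of _remove_spdk_ns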
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:14:21.836 00:14:21.836 real 0m2.978s 00:14:21.836 user 0m8.355s 00:14:21.836 sys 0m1.415s 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:21.836 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:21.836 ************************************ 00:14:21.836 END TEST nvmf_bdevio_no_huge 00:14:21.836 ************************************ 00:14:22.095 10:37:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:22.095 10:37:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:22.095 10:37:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:22.095 10:37:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:22.095 ************************************ 00:14:22.095 START TEST nvmf_tls 00:14:22.095 ************************************ 00:14:22.095 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:22.095 * Looking for test storage... 
00:14:22.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:22.095 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:22.095 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:14:22.095 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:22.095 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:22.095 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:22.095 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:22.095 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:22.095 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:14:22.095 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:14:22.095 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:14:22.095 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:14:22.095 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:14:22.095 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:14:22.095 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:14:22.095 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:22.095 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:22.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.096 --rc genhtml_branch_coverage=1 00:14:22.096 --rc genhtml_function_coverage=1 00:14:22.096 --rc genhtml_legend=1 00:14:22.096 --rc geninfo_all_blocks=1 00:14:22.096 --rc geninfo_unexecuted_blocks=1 00:14:22.096 00:14:22.096 ' 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:22.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.096 --rc genhtml_branch_coverage=1 00:14:22.096 --rc genhtml_function_coverage=1 00:14:22.096 --rc genhtml_legend=1 00:14:22.096 --rc geninfo_all_blocks=1 00:14:22.096 --rc geninfo_unexecuted_blocks=1 00:14:22.096 00:14:22.096 ' 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:22.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.096 --rc genhtml_branch_coverage=1 00:14:22.096 --rc genhtml_function_coverage=1 00:14:22.096 --rc genhtml_legend=1 00:14:22.096 --rc geninfo_all_blocks=1 00:14:22.096 --rc geninfo_unexecuted_blocks=1 00:14:22.096 00:14:22.096 ' 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:22.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.096 --rc genhtml_branch_coverage=1 00:14:22.096 --rc genhtml_function_coverage=1 00:14:22.096 --rc genhtml_legend=1 00:14:22.096 --rc geninfo_all_blocks=1 00:14:22.096 --rc geninfo_unexecuted_blocks=1 00:14:22.096 00:14:22.096 ' 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.096 10:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:22.096 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:22.096 
10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:22.096 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:22.097 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:22.097 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:22.097 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:22.097 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:22.097 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:22.097 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:22.097 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:22.097 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:22.097 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:22.097 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:22.097 Cannot find device "nvmf_init_br" 00:14:22.097 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:14:22.097 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:22.420 Cannot find device "nvmf_init_br2" 00:14:22.420 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:22.420 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:22.420 Cannot find device "nvmf_tgt_br" 00:14:22.420 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:14:22.420 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:22.420 Cannot find device "nvmf_tgt_br2" 00:14:22.420 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:14:22.420 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:22.420 Cannot find device "nvmf_init_br" 00:14:22.420 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:14:22.420 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:22.420 Cannot find device "nvmf_init_br2" 00:14:22.420 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:14:22.420 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:22.420 Cannot find device "nvmf_tgt_br" 00:14:22.420 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:14:22.420 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:22.420 Cannot find device "nvmf_tgt_br2" 00:14:22.420 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:14:22.420 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:22.420 Cannot find device "nvmf_br" 00:14:22.420 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:14:22.420 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:22.420 Cannot find device "nvmf_init_if" 00:14:22.420 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:22.421 Cannot find device "nvmf_init_if2" 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:22.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:22.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:22.421 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:22.704 10:37:47 
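Note the re-init pattern visible above: every pre-clean command (nomaster, down, delete) is allowed to fail, which is why each "Cannot find device" message is followed by a traced "# true" at the same line number. That makes nvmf_veth_init idempotent whether or not a previous test left interfaces behind. The guard plausibly amounts to the following, though the helper's exact form is inferred from the trace rather than shown in it:

  # Pre-clean may fail on a fresh run; the harness tolerates that:
  ip link set nvmf_init_br nomaster || true
  ip link delete nvmf_br type bridge || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true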
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:22.704 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:22.704 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:14:22.704 00:14:22.704 --- 10.0.0.3 ping statistics --- 00:14:22.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.704 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:22.704 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:22.704 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:14:22.704 00:14:22.704 --- 10.0.0.4 ping statistics --- 00:14:22.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.704 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:22.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:22.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:22.704 00:14:22.704 --- 10.0.0.1 ping statistics --- 00:14:22.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.704 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:22.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:22.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:14:22.704 00:14:22.704 --- 10.0.0.2 ping statistics --- 00:14:22.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.704 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71406 00:14:22.704 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71406 00:14:22.705 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:22.705 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71406 ']' 00:14:22.705 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.705 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:22.705 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.705 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:22.705 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.705 [2024-11-15 10:37:48.023630] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
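Unlike the bdevio run, the TLS target is started with --wait-for-rpc: the app comes up but holds before subsystem initialization, so socket-implementation options can still be changed over RPC. The tls.sh preamble traced below exercises exactly that window, switching the default socket implementation to ssl and probing the TLS version setting. Sketched from the RPCs in the trace:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc sock_set_default_impl -i ssl
  $rpc sock_impl_set_options -i ssl --tls-version 13
  $rpc sock_impl_get_options -i ssl | jq -r .tls_version   # expect 13
  $rpc framework_start_init    # leave the pre-init window; sock options are now fixed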
00:14:22.705 [2024-11-15 10:37:48.023728] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.705 [2024-11-15 10:37:48.174653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.963 [2024-11-15 10:37:48.235742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.963 [2024-11-15 10:37:48.235796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.963 [2024-11-15 10:37:48.235808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.963 [2024-11-15 10:37:48.235816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.963 [2024-11-15 10:37:48.235824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.963 [2024-11-15 10:37:48.236252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.963 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:22.963 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:22.963 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:22.963 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:22.963 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.963 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.963 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:14:22.963 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:23.221 true 00:14:23.221 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:23.221 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:14:23.479 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:14:23.479 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:14:23.479 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:24.045 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:24.045 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:14:24.303 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:14:24.303 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:14:24.303 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:24.561 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:14:24.561 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:14:24.818 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:14:24.818 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:14:24.818 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:24.818 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:14:25.076 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:14:25.076 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:14:25.076 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:25.334 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:14:25.334 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:25.591 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:14:25.591 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:14:25.591 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:25.850 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:25.850 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.7mAnlt4fbb 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.DQdXqdA7oQ 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.7mAnlt4fbb 00:14:26.110 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.DQdXqdA7oQ 00:14:26.371 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:26.629 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:26.887 [2024-11-15 10:37:52.199460] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:26.887 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.7mAnlt4fbb 00:14:26.887 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.7mAnlt4fbb 00:14:26.887 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:27.145 [2024-11-15 10:37:52.493656] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.145 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:27.403 10:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:27.662 [2024-11-15 10:37:53.041779] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:27.662 [2024-11-15 10:37:53.042025] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:27.662 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:27.920 malloc0 00:14:27.920 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:28.177 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.7mAnlt4fbb 00:14:28.435 10:37:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:28.693 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.7mAnlt4fbb 00:14:40.932 Initializing NVMe Controllers 00:14:40.932 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:40.932 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:40.932 Initialization complete. Launching workers. 00:14:40.932 ======================================================== 00:14:40.932 Latency(us) 00:14:40.932 Device Information : IOPS MiB/s Average min max 00:14:40.932 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9435.79 36.86 6784.38 1266.65 8748.28 00:14:40.932 ======================================================== 00:14:40.932 Total : 9435.79 36.86 6784.38 1266.65 8748.28 00:14:40.932 00:14:40.932 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7mAnlt4fbb 00:14:40.932 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:40.932 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:40.932 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:40.932 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.7mAnlt4fbb 00:14:40.932 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:40.932 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:40.932 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71642 00:14:40.932 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:40.932 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71642 /var/tmp/bdevperf.sock 00:14:40.932 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71642 ']' 00:14:40.932 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:40.932 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:40.932 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:40.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
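The format_interchange_psk calls traced above wrap a configured key in the NVMe TLS PSK interchange format: the key bytes plus a 4-byte CRC-32, base64-encoded between the NVMeTLSkey-1 prefix and a trailing colon, with the digest argument (1 or 2, the 01/02 hash field, i.e. SHA-256 or SHA-384) in the header. A minimal re-creation of that helper, assuming the key string is used as raw ASCII bytes and the CRC-32 is appended little-endian (the one detail the log itself does not show):

format_psk_sketch() {
    local key=$1 digest=$2   # digest: 1 = SHA-256, 2 = SHA-384
    python3 -c '
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
# interchange format: base64(key bytes || little-endian CRC-32 of key)
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"NVMeTLSkey-1:{digest:02d}:{base64.b64encode(key + crc).decode()}:")
' "$key" "$digest"
}
# format_psk_sketch 00112233445566778899aabbccddeeff 1
# with the assumptions above, should reproduce the key0 value logged here:
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

Decoding the logged base64 yields the 32 ASCII characters of the input followed by exactly 4 extra bytes, which is what pins down the key-plus-CRC layout.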
00:14:40.932 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:40.932 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.932 [2024-11-15 10:38:04.430310] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:14:40.932 [2024-11-15 10:38:04.430588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71642 ] 00:14:40.932 [2024-11-15 10:38:04.580235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.933 [2024-11-15 10:38:04.650699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.933 [2024-11-15 10:38:04.708899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:40.933 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:40.933 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:40.933 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7mAnlt4fbb 00:14:40.933 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:40.933 [2024-11-15 10:38:05.346374] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:40.933 TLSTESTn1 00:14:40.933 10:38:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:40.933 Running I/O for 10 seconds... 
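The verify run now producing the per-second samples below was stood up by the same four-step pattern every run_bdevperf case in this script uses: start bdevperf idle (-z) on a private RPC socket, register the PSK file as a keyring key, attach a TLS-enabled controller, then trigger I/O. Condensed from the trace just above (paths relative to the SPDK checkout; NQNs and socket paths are the ones from this run):

# start bdevperf idle on a private RPC socket, then drive it over RPC
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# register the PSK file and attach a TLS-enabled controller (creates TLSTESTn1)
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7mAnlt4fbb
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# kick off the configured verify workload
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

A successful attach creates the TLSTESTn1 bdev seen above; the failure cases that follow never get that far.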
00:14:42.125 4077.00 IOPS, 15.93 MiB/s [2024-11-15T10:38:08.558Z] 4113.50 IOPS, 16.07 MiB/s [2024-11-15T10:38:09.934Z] 4122.33 IOPS, 16.10 MiB/s [2024-11-15T10:38:10.870Z] 4119.50 IOPS, 16.09 MiB/s [2024-11-15T10:38:11.805Z] 4112.80 IOPS, 16.07 MiB/s [2024-11-15T10:38:12.738Z] 4119.00 IOPS, 16.09 MiB/s [2024-11-15T10:38:13.671Z] 4126.29 IOPS, 16.12 MiB/s [2024-11-15T10:38:14.605Z] 4227.88 IOPS, 16.52 MiB/s [2024-11-15T10:38:15.982Z] 4303.00 IOPS, 16.81 MiB/s [2024-11-15T10:38:15.982Z] 4359.50 IOPS, 17.03 MiB/s 00:14:50.484 Latency(us) 00:14:50.484 [2024-11-15T10:38:15.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.484 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:50.484 Verification LBA range: start 0x0 length 0x2000 00:14:50.484 TLSTESTn1 : 10.01 4366.12 17.06 0.00 0.00 29266.35 4289.63 24546.21 00:14:50.484 [2024-11-15T10:38:15.982Z] =================================================================================================================== 00:14:50.484 [2024-11-15T10:38:15.982Z] Total : 4366.12 17.06 0.00 0.00 29266.35 4289.63 24546.21 00:14:50.484 { 00:14:50.484 "results": [ 00:14:50.484 { 00:14:50.484 "job": "TLSTESTn1", 00:14:50.484 "core_mask": "0x4", 00:14:50.484 "workload": "verify", 00:14:50.484 "status": "finished", 00:14:50.484 "verify_range": { 00:14:50.484 "start": 0, 00:14:50.484 "length": 8192 00:14:50.484 }, 00:14:50.484 "queue_depth": 128, 00:14:50.484 "io_size": 4096, 00:14:50.484 "runtime": 10.0137, 00:14:50.484 "iops": 4366.118417767659, 00:14:50.484 "mibps": 17.055150069404917, 00:14:50.484 "io_failed": 0, 00:14:50.484 "io_timeout": 0, 00:14:50.484 "avg_latency_us": 29266.35405943888, 00:14:50.485 "min_latency_us": 4289.629090909091, 00:14:50.485 "max_latency_us": 24546.21090909091 00:14:50.485 } 00:14:50.485 ], 00:14:50.485 "core_count": 1 00:14:50.485 } 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71642 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71642 ']' 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71642 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71642 00:14:50.485 killing process with pid 71642 00:14:50.485 Received shutdown signal, test time was about 10.000000 seconds 00:14:50.485 00:14:50.485 Latency(us) 00:14:50.485 [2024-11-15T10:38:15.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.485 [2024-11-15T10:38:15.983Z] =================================================================================================================== 00:14:50.485 [2024-11-15T10:38:15.983Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 71642' 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71642 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71642 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DQdXqdA7oQ 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DQdXqdA7oQ 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DQdXqdA7oQ 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.DQdXqdA7oQ 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71769 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:50.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71769 /var/tmp/bdevperf.sock 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71769 ']' 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:50.485 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:50.485 [2024-11-15 10:38:15.874478] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:14:50.485 [2024-11-15 10:38:15.874617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71769 ] 00:14:50.743 [2024-11-15 10:38:16.019492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.743 [2024-11-15 10:38:16.081521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.743 [2024-11-15 10:38:16.138360] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:50.743 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:50.743 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:50.743 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DQdXqdA7oQ 00:14:51.310 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:51.310 [2024-11-15 10:38:16.802621] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:51.568 [2024-11-15 10:38:16.808253] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:51.568 [2024-11-15 10:38:16.808933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2166fb0 (107): Transport endpoint is not connected 00:14:51.568 [2024-11-15 10:38:16.809852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2166fb0 (9): Bad file descriptor 00:14:51.568 [2024-11-15 10:38:16.810848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:51.568 [2024-11-15 10:38:16.810879] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:51.568 [2024-11-15 10:38:16.810893] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:51.568 [2024-11-15 10:38:16.810910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:14:51.568 request: 00:14:51.568 { 00:14:51.568 "name": "TLSTEST", 00:14:51.568 "trtype": "tcp", 00:14:51.568 "traddr": "10.0.0.3", 00:14:51.568 "adrfam": "ipv4", 00:14:51.568 "trsvcid": "4420", 00:14:51.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:51.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:51.568 "prchk_reftag": false, 00:14:51.568 "prchk_guard": false, 00:14:51.568 "hdgst": false, 00:14:51.568 "ddgst": false, 00:14:51.568 "psk": "key0", 00:14:51.568 "allow_unrecognized_csi": false, 00:14:51.568 "method": "bdev_nvme_attach_controller", 00:14:51.568 "req_id": 1 00:14:51.568 } 00:14:51.568 Got JSON-RPC error response 00:14:51.568 response: 00:14:51.568 { 00:14:51.568 "code": -5, 00:14:51.568 "message": "Input/output error" 00:14:51.568 } 00:14:51.568 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71769 00:14:51.568 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71769 ']' 00:14:51.568 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71769 00:14:51.568 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:51.568 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:51.568 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71769 00:14:51.568 killing process with pid 71769 00:14:51.568 Received shutdown signal, test time was about 10.000000 seconds 00:14:51.568 00:14:51.568 Latency(us) 00:14:51.568 [2024-11-15T10:38:17.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.569 [2024-11-15T10:38:17.067Z] =================================================================================================================== 00:14:51.569 [2024-11-15T10:38:17.067Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:51.569 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:51.569 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:51.569 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71769' 00:14:51.569 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71769 00:14:51.569 10:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71769 00:14:51.569 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:51.569 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:51.569 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:51.569 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:51.569 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:51.569 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7mAnlt4fbb 00:14:51.569 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:51.569 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7mAnlt4fbb 
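The bdev_nvme_attach_controller failure above (code -5, Input/output error) is the test passing, not failing: this case presents /tmp/tmp.DQdXqdA7oQ, a key the target never had registered, and the whole run_bdevperf is wrapped in SPDK's NOT helper, whose es=1 bookkeeping is visible in the trace. A minimal sketch of that assertion pattern (the idea only, not autotest_common.sh's actual implementation):

# invert a command's exit status so that an expected failure passes
NOT() {
    if "$@"; then
        return 1   # unexpectedly succeeded: the negative test fails
    fi
    return 0       # failed as expected (here: TLS attach with the wrong PSK)
}
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DQdXqdA7oQ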
00:14:51.569 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:51.569 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:51.569 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:51.828 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:51.828 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7mAnlt4fbb 00:14:51.828 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:51.828 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:51.828 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:51.828 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.7mAnlt4fbb 00:14:51.828 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:51.828 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71791 00:14:51.828 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:51.828 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:51.828 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71791 /var/tmp/bdevperf.sock 00:14:51.828 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71791 ']' 00:14:51.828 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:51.828 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:51.828 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:51.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:51.828 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:51.828 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:51.828 [2024-11-15 10:38:17.113662] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:14:51.828 [2024-11-15 10:38:17.113746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71791 ] 00:14:51.828 [2024-11-15 10:38:17.255086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.828 [2024-11-15 10:38:17.319493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.086 [2024-11-15 10:38:17.373625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:53.019 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:53.019 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:53.019 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7mAnlt4fbb 00:14:53.019 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:53.585 [2024-11-15 10:38:18.773882] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:53.585 [2024-11-15 10:38:18.787198] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:53.585 [2024-11-15 10:38:18.787242] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:53.585 [2024-11-15 10:38:18.787294] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:53.585 [2024-11-15 10:38:18.787376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x546fb0 (107): Transport endpoint is not connected 00:14:53.585 [2024-11-15 10:38:18.788363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x546fb0 (9): Bad file descriptor 00:14:53.585 [2024-11-15 10:38:18.789362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:53.585 [2024-11-15 10:38:18.789405] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:53.585 [2024-11-15 10:38:18.789424] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:53.585 [2024-11-15 10:38:18.789448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:14:53.585 request: 00:14:53.585 { 00:14:53.585 "name": "TLSTEST", 00:14:53.585 "trtype": "tcp", 00:14:53.585 "traddr": "10.0.0.3", 00:14:53.585 "adrfam": "ipv4", 00:14:53.585 "trsvcid": "4420", 00:14:53.585 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.585 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:53.585 "prchk_reftag": false, 00:14:53.585 "prchk_guard": false, 00:14:53.585 "hdgst": false, 00:14:53.585 "ddgst": false, 00:14:53.585 "psk": "key0", 00:14:53.585 "allow_unrecognized_csi": false, 00:14:53.585 "method": "bdev_nvme_attach_controller", 00:14:53.585 "req_id": 1 00:14:53.585 } 00:14:53.585 Got JSON-RPC error response 00:14:53.585 response: 00:14:53.585 { 00:14:53.585 "code": -5, 00:14:53.585 "message": "Input/output error" 00:14:53.585 } 00:14:53.585 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71791 00:14:53.585 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71791 ']' 00:14:53.585 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71791 00:14:53.585 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:53.585 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:53.585 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71791 00:14:53.585 killing process with pid 71791 00:14:53.585 Received shutdown signal, test time was about 10.000000 seconds 00:14:53.585 00:14:53.585 Latency(us) 00:14:53.585 [2024-11-15T10:38:19.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.585 [2024-11-15T10:38:19.083Z] =================================================================================================================== 00:14:53.585 [2024-11-15T10:38:19.083Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:53.585 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:53.585 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:53.585 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71791' 00:14:53.585 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71791 00:14:53.585 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71791 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7mAnlt4fbb 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7mAnlt4fbb 
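The case that just errored out is the host-mismatch one: key0 points at the valid key file, but the connection is made as host2 while the target provisioned the key for host1 only. The target derives its PSK lookup identity from both NQNs, which is why tcp_sock_get_key logs an identity it cannot resolve. Reconstructing just that lookup string from the values in the error above:

# the identity the target failed to resolve (the NVMe0R01 prefix carries
# the identity version, PSK type and hash indicator; the two NQNs follow)
hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
# key0 was added with nvmf_subsystem_add_host for host1 only, so no match

The cnode2 case starting next fails the same way from the other side: a valid host NQN against a subsystem the target never created or keyed.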
00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7mAnlt4fbb 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.7mAnlt4fbb 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71821 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71821 /var/tmp/bdevperf.sock 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71821 ']' 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:53.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:53.585 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:53.882 [2024-11-15 10:38:19.101138] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:14:53.882 [2024-11-15 10:38:19.101224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71821 ] 00:14:53.882 [2024-11-15 10:38:19.249535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.882 [2024-11-15 10:38:19.318556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.154 [2024-11-15 10:38:19.380642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:54.154 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:54.154 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:54.154 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7mAnlt4fbb 00:14:54.413 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:54.671 [2024-11-15 10:38:20.004917] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:54.671 [2024-11-15 10:38:20.017321] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:54.671 [2024-11-15 10:38:20.017365] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:54.671 [2024-11-15 10:38:20.017417] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:54.672 [2024-11-15 10:38:20.017876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1022fb0 (107): Transport endpoint is not connected 00:14:54.672 [2024-11-15 10:38:20.018868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1022fb0 (9): Bad file descriptor 00:14:54.672 [2024-11-15 10:38:20.019865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:14:54.672 [2024-11-15 10:38:20.019895] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:54.672 [2024-11-15 10:38:20.019907] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:54.672 [2024-11-15 10:38:20.019924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:14:54.672 request: 00:14:54.672 { 00:14:54.672 "name": "TLSTEST", 00:14:54.672 "trtype": "tcp", 00:14:54.672 "traddr": "10.0.0.3", 00:14:54.672 "adrfam": "ipv4", 00:14:54.672 "trsvcid": "4420", 00:14:54.672 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:54.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:54.672 "prchk_reftag": false, 00:14:54.672 "prchk_guard": false, 00:14:54.672 "hdgst": false, 00:14:54.672 "ddgst": false, 00:14:54.672 "psk": "key0", 00:14:54.672 "allow_unrecognized_csi": false, 00:14:54.672 "method": "bdev_nvme_attach_controller", 00:14:54.672 "req_id": 1 00:14:54.672 } 00:14:54.672 Got JSON-RPC error response 00:14:54.672 response: 00:14:54.672 { 00:14:54.672 "code": -5, 00:14:54.672 "message": "Input/output error" 00:14:54.672 } 00:14:54.672 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71821 00:14:54.672 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71821 ']' 00:14:54.672 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71821 00:14:54.672 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:54.672 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:54.672 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71821 00:14:54.672 killing process with pid 71821 00:14:54.672 Received shutdown signal, test time was about 10.000000 seconds 00:14:54.672 00:14:54.672 Latency(us) 00:14:54.672 [2024-11-15T10:38:20.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.672 [2024-11-15T10:38:20.170Z] =================================================================================================================== 00:14:54.672 [2024-11-15T10:38:20.170Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:54.672 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:54.672 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:54.672 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71821' 00:14:54.672 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71821 00:14:54.672 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71821 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:54.930 10:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71842 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71842 /var/tmp/bdevperf.sock 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71842 ']' 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:54.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:54.930 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:54.930 [2024-11-15 10:38:20.327389] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:14:54.930 [2024-11-15 10:38:20.327696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71842 ] 00:14:55.188 [2024-11-15 10:38:20.477063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.189 [2024-11-15 10:38:20.538738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.189 [2024-11-15 10:38:20.592002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:56.122 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:56.122 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:56.122 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:56.379 [2024-11-15 10:38:21.637894] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:56.379 [2024-11-15 10:38:21.638307] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:56.379 request: 00:14:56.379 { 00:14:56.379 "name": "key0", 00:14:56.379 "path": "", 00:14:56.379 "method": "keyring_file_add_key", 00:14:56.379 "req_id": 1 00:14:56.379 } 00:14:56.379 Got JSON-RPC error response 00:14:56.379 response: 00:14:56.379 { 00:14:56.379 "code": -1, 00:14:56.379 "message": "Operation not permitted" 00:14:56.379 } 00:14:56.379 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:56.637 [2024-11-15 10:38:21.926097] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:56.637 [2024-11-15 10:38:21.926740] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:56.637 request: 00:14:56.637 { 00:14:56.637 "name": "TLSTEST", 00:14:56.637 "trtype": "tcp", 00:14:56.637 "traddr": "10.0.0.3", 00:14:56.637 "adrfam": "ipv4", 00:14:56.637 "trsvcid": "4420", 00:14:56.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:56.638 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:56.638 "prchk_reftag": false, 00:14:56.638 "prchk_guard": false, 00:14:56.638 "hdgst": false, 00:14:56.638 "ddgst": false, 00:14:56.638 "psk": "key0", 00:14:56.638 "allow_unrecognized_csi": false, 00:14:56.638 "method": "bdev_nvme_attach_controller", 00:14:56.638 "req_id": 1 00:14:56.638 } 00:14:56.638 Got JSON-RPC error response 00:14:56.638 response: 00:14:56.638 { 00:14:56.638 "code": -126, 00:14:56.638 "message": "Required key not available" 00:14:56.638 } 00:14:56.638 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71842 00:14:56.638 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71842 ']' 00:14:56.638 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71842 00:14:56.638 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:56.638 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:56.638 10:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71842 00:14:56.638 killing process with pid 71842 00:14:56.638 Received shutdown signal, test time was about 10.000000 seconds 00:14:56.638 00:14:56.638 Latency(us) 00:14:56.638 [2024-11-15T10:38:22.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.638 [2024-11-15T10:38:22.136Z] =================================================================================================================== 00:14:56.638 [2024-11-15T10:38:22.136Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:56.638 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:56.638 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:56.638 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71842' 00:14:56.638 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71842 00:14:56.638 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71842 00:14:56.896 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:56.896 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:56.896 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:56.896 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:56.896 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:56.896 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71406 00:14:56.896 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71406 ']' 00:14:56.896 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71406 00:14:56.896 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:56.896 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:56.896 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71406 00:14:56.896 killing process with pid 71406 00:14:56.896 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:56.896 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:56.896 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71406' 00:14:56.896 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71406 00:14:56.896 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71406 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.4e8bETFlTP 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.4e8bETFlTP 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71892 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71892 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71892 ']' 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:57.155 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.155 [2024-11-15 10:38:22.536393] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:14:57.155 [2024-11-15 10:38:22.536493] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.413 [2024-11-15 10:38:22.690181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.413 [2024-11-15 10:38:22.757144] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.413 [2024-11-15 10:38:22.757205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:57.413 [2024-11-15 10:38:22.757229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.413 [2024-11-15 10:38:22.757247] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.413 [2024-11-15 10:38:22.757256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.413 [2024-11-15 10:38:22.757770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.413 [2024-11-15 10:38:22.815464] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:58.059 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:58.059 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:58.059 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:58.059 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:58.059 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:58.317 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.317 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.4e8bETFlTP 00:14:58.317 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.4e8bETFlTP 00:14:58.317 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:58.575 [2024-11-15 10:38:23.833174] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.575 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:58.833 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:59.091 [2024-11-15 10:38:24.345305] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:59.091 [2024-11-15 10:38:24.345593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:59.091 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:59.355 malloc0 00:14:59.355 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:59.614 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4e8bETFlTP 00:14:59.872 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:00.130 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4e8bETFlTP 00:15:00.130 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
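Note: stripped of the xtrace noise, the setup_nvmf_tgt run above is seven rpc.py calls. A sketch of the same sequence driven from Python; the subprocess wrapper is ours, while the commands, arguments, and rpc.py path are exactly those in the trace:

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

    def rpc(*args):
        # One rpc.py invocation per traced step.
        subprocess.run([RPC, *args], check=True)

    rpc("nvmf_create_transport", "-t", "tcp", "-o")
    rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1",
        "-s", "SPDK00000000000001", "-m", "10")
    # -k makes the listener TLS-enabled, hence the "TLS support is
    # considered experimental" notice in the trace.
    rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
        "-t", "tcp", "-a", "10.0.0.3", "-s", "4420", "-k")
    rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
    rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1")
    rpc("keyring_file_add_key", "key0", "/tmp/tmp.4e8bETFlTP")
    rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
        "nqn.2016-06.io.spdk:host1", "--psk", "key0")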
00:15:00.130 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:00.130 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:00.130 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.4e8bETFlTP 00:15:00.130 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:00.130 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:00.130 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71953 00:15:00.130 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:00.130 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71953 /var/tmp/bdevperf.sock 00:15:00.130 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71953 ']' 00:15:00.130 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:00.130 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:00.130 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:00.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:00.130 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:00.130 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:00.130 [2024-11-15 10:38:25.458308] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
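Note: the bdevperf process launched above (-z keeps it idle until told to run, -r points it at its own RPC socket) is the TLS initiator. As the following trace lines show, the test then registers the same key file on that socket and attaches a controller with --psk, which is what drives the TLS handshake. The equivalent calls, sketched self-contained with the same assumed rpc.py path:

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    SOCK = "/var/tmp/bdevperf.sock"

    def rpc(*args):
        subprocess.run([RPC, "-s", SOCK, *args], check=True)

    # The initiator keeps its own keyring, so key0 is added again here...
    rpc("keyring_file_add_key", "key0", "/tmp/tmp.4e8bETFlTP")
    # ...and referenced at attach time; TLSTEST becomes the bdev name.
    rpc("bdev_nvme_attach_controller", "-b", "TLSTEST", "-t", "tcp",
        "-a", "10.0.0.3", "-s", "4420", "-f", "ipv4",
        "-n", "nqn.2016-06.io.spdk:cnode1",
        "-q", "nqn.2016-06.io.spdk:host1", "--psk", "key0")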
00:15:00.131 [2024-11-15 10:38:25.458539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71953 ] 00:15:00.131 [2024-11-15 10:38:25.603706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.389 [2024-11-15 10:38:25.666895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.389 [2024-11-15 10:38:25.722292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:00.389 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:00.389 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:00.389 10:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4e8bETFlTP 00:15:00.647 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:00.905 [2024-11-15 10:38:26.287726] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:00.905 TLSTESTn1 00:15:00.905 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:01.163 Running I/O for 10 seconds... 00:15:03.033 4005.00 IOPS, 15.64 MiB/s [2024-11-15T10:38:29.906Z] 4046.50 IOPS, 15.81 MiB/s [2024-11-15T10:38:30.855Z] 4070.33 IOPS, 15.90 MiB/s [2024-11-15T10:38:31.864Z] 4088.00 IOPS, 15.97 MiB/s [2024-11-15T10:38:32.797Z] 4093.40 IOPS, 15.99 MiB/s [2024-11-15T10:38:33.731Z] 4070.67 IOPS, 15.90 MiB/s [2024-11-15T10:38:34.666Z] 4081.00 IOPS, 15.94 MiB/s [2024-11-15T10:38:35.606Z] 4082.75 IOPS, 15.95 MiB/s [2024-11-15T10:38:36.541Z] 4083.89 IOPS, 15.95 MiB/s [2024-11-15T10:38:36.800Z] 4083.00 IOPS, 15.95 MiB/s 00:15:11.302 Latency(us) 00:15:11.302 [2024-11-15T10:38:36.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.302 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:11.302 Verification LBA range: start 0x0 length 0x2000 00:15:11.302 TLSTESTn1 : 10.02 4088.74 15.97 0.00 0.00 31247.70 5928.03 35031.97 00:15:11.302 [2024-11-15T10:38:36.800Z] =================================================================================================================== 00:15:11.302 [2024-11-15T10:38:36.800Z] Total : 4088.74 15.97 0.00 0.00 31247.70 5928.03 35031.97 00:15:11.302 { 00:15:11.302 "results": [ 00:15:11.302 { 00:15:11.302 "job": "TLSTESTn1", 00:15:11.302 "core_mask": "0x4", 00:15:11.302 "workload": "verify", 00:15:11.302 "status": "finished", 00:15:11.302 "verify_range": { 00:15:11.302 "start": 0, 00:15:11.302 "length": 8192 00:15:11.302 }, 00:15:11.302 "queue_depth": 128, 00:15:11.302 "io_size": 4096, 00:15:11.302 "runtime": 10.016522, 00:15:11.302 "iops": 4088.744576211184, 00:15:11.302 "mibps": 15.971658500824937, 00:15:11.302 "io_failed": 0, 00:15:11.302 "io_timeout": 0, 00:15:11.302 "avg_latency_us": 31247.697688949065, 00:15:11.302 "min_latency_us": 5928.029090909091, 00:15:11.302 
"max_latency_us": 35031.97090909091 00:15:11.302 } 00:15:11.302 ], 00:15:11.302 "core_count": 1 00:15:11.302 } 00:15:11.302 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:11.302 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71953 00:15:11.302 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71953 ']' 00:15:11.302 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71953 00:15:11.302 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:11.302 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:11.302 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71953 00:15:11.302 killing process with pid 71953 00:15:11.302 Received shutdown signal, test time was about 10.000000 seconds 00:15:11.302 00:15:11.302 Latency(us) 00:15:11.302 [2024-11-15T10:38:36.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.302 [2024-11-15T10:38:36.800Z] =================================================================================================================== 00:15:11.302 [2024-11-15T10:38:36.800Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:11.302 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:15:11.302 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:15:11.302 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71953' 00:15:11.302 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71953 00:15:11.302 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71953 00:15:11.302 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.4e8bETFlTP 00:15:11.302 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4e8bETFlTP 00:15:11.302 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:11.302 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4e8bETFlTP 00:15:11.302 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:11.561 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.561 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:11.561 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.561 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4e8bETFlTP 00:15:11.561 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:11.561 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:11.561 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:11.561 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.4e8bETFlTP 00:15:11.561 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:11.561 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72081 00:15:11.561 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:11.561 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:11.561 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72081 /var/tmp/bdevperf.sock 00:15:11.561 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72081 ']' 00:15:11.561 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:11.561 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:11.561 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:11.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:11.561 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:11.561 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.561 [2024-11-15 10:38:36.848327] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
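Note: this run is the negative case: tls.sh@171 loosened the key file to 0666 above, and NOT run_bdevperf expects the attach to fail, because the keyring refuses any key file that group or other can access. That refusal is the "Invalid permissions for key file '/tmp/tmp.4e8bETFlTP': 0100666" error a few lines below. An illustrative Python equivalent of the check (SPDK's real check, keyring_file_check_path, lives in C):

    import os
    import stat

    def check_key_file_mode(path):
        # Accept 0600-style modes only: any group/other permission bit
        # disqualifies the file as a keyring key.
        mode = os.stat(path).st_mode
        if stat.S_IMODE(mode) & 0o077:
            raise PermissionError(
                f"Invalid permissions for key file '{path}': 0{mode:o}")

    check_key_file_mode("/tmp/tmp.4e8bETFlTP")  # raises while the file is 0666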
00:15:11.561 [2024-11-15 10:38:36.848840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72081 ] 00:15:11.561 [2024-11-15 10:38:36.992341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.561 [2024-11-15 10:38:37.053722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.819 [2024-11-15 10:38:37.110559] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:11.819 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:11.820 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:11.820 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4e8bETFlTP 00:15:12.078 [2024-11-15 10:38:37.466350] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.4e8bETFlTP': 0100666 00:15:12.078 [2024-11-15 10:38:37.466406] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:12.078 request: 00:15:12.078 { 00:15:12.078 "name": "key0", 00:15:12.078 "path": "/tmp/tmp.4e8bETFlTP", 00:15:12.078 "method": "keyring_file_add_key", 00:15:12.078 "req_id": 1 00:15:12.078 } 00:15:12.078 Got JSON-RPC error response 00:15:12.078 response: 00:15:12.078 { 00:15:12.078 "code": -1, 00:15:12.078 "message": "Operation not permitted" 00:15:12.078 } 00:15:12.078 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:12.337 [2024-11-15 10:38:37.802833] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:12.337 [2024-11-15 10:38:37.802924] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:12.337 request: 00:15:12.337 { 00:15:12.337 "name": "TLSTEST", 00:15:12.337 "trtype": "tcp", 00:15:12.337 "traddr": "10.0.0.3", 00:15:12.337 "adrfam": "ipv4", 00:15:12.337 "trsvcid": "4420", 00:15:12.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:12.337 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:12.337 "prchk_reftag": false, 00:15:12.337 "prchk_guard": false, 00:15:12.337 "hdgst": false, 00:15:12.337 "ddgst": false, 00:15:12.337 "psk": "key0", 00:15:12.337 "allow_unrecognized_csi": false, 00:15:12.337 "method": "bdev_nvme_attach_controller", 00:15:12.337 "req_id": 1 00:15:12.337 } 00:15:12.337 Got JSON-RPC error response 00:15:12.337 response: 00:15:12.337 { 00:15:12.337 "code": -126, 00:15:12.337 "message": "Required key not available" 00:15:12.337 } 00:15:12.337 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72081 00:15:12.337 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72081 ']' 00:15:12.337 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72081 00:15:12.337 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:12.337 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:12.337 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72081 00:15:12.595 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:15:12.595 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:15:12.595 killing process with pid 72081 00:15:12.595 Received shutdown signal, test time was about 10.000000 seconds 00:15:12.595 00:15:12.595 Latency(us) 00:15:12.595 [2024-11-15T10:38:38.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.595 [2024-11-15T10:38:38.093Z] =================================================================================================================== 00:15:12.595 [2024-11-15T10:38:38.093Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:12.595 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72081' 00:15:12.595 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72081 00:15:12.595 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72081 00:15:12.595 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:12.595 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:12.595 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:12.595 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:12.595 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:12.595 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71892 00:15:12.595 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71892 ']' 00:15:12.595 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71892 00:15:12.595 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:12.595 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:12.595 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71892 00:15:12.595 killing process with pid 71892 00:15:12.595 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:12.595 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:12.595 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71892' 00:15:12.595 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71892 00:15:12.595 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71892 00:15:12.853 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:15:12.853 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:12.853 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:12.853 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:15:12.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.853 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72107 00:15:12.853 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:12.853 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72107 00:15:12.853 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72107 ']' 00:15:12.853 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.853 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:12.853 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.853 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:12.853 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:12.853 [2024-11-15 10:38:38.329385] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:15:12.853 [2024-11-15 10:38:38.329877] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.112 [2024-11-15 10:38:38.476347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.112 [2024-11-15 10:38:38.534496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.112 [2024-11-15 10:38:38.534750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.112 [2024-11-15 10:38:38.534875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.112 [2024-11-15 10:38:38.535005] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.112 [2024-11-15 10:38:38.535040] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
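Note: a fresh nvmf_tgt (pid 72107) is coming up here for the server-side negative test. The waitforlisten helper traced above (max_retries=100) blocks until the new process answers on its UNIX-domain RPC socket. An illustrative stand-in; the real helper is a bash function in autotest_common.sh, and the poll interval here is an assumption:

    import socket
    import time

    def waitforlisten(sock_path="/var/tmp/spdk.sock", max_retries=100, delay=0.2):
        # Retry until the RPC socket exists and accepts a connection.
        for _ in range(max_retries):
            try:
                with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                    s.connect(sock_path)
                    return
            except OSError:
                time.sleep(delay)
        raise TimeoutError(f"{sock_path} never started listening")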
00:15:13.112 [2024-11-15 10:38:38.535577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.112 [2024-11-15 10:38:38.589658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:13.370 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:13.370 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:13.370 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:13.370 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:13.370 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:13.371 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:13.371 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.4e8bETFlTP 00:15:13.371 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:13.371 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.4e8bETFlTP 00:15:13.371 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:15:13.371 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:13.371 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:15:13.371 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:13.371 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.4e8bETFlTP 00:15:13.371 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.4e8bETFlTP 00:15:13.371 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:13.628 [2024-11-15 10:38:38.979573] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.628 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:13.885 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:14.190 [2024-11-15 10:38:39.487680] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:14.190 [2024-11-15 10:38:39.487929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:14.190 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:14.451 malloc0 00:15:14.451 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:14.711 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4e8bETFlTP 00:15:14.967 
[2024-11-15 10:38:40.423008] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.4e8bETFlTP': 0100666 00:15:14.967 [2024-11-15 10:38:40.423064] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:14.967 request: 00:15:14.967 { 00:15:14.967 "name": "key0", 00:15:14.967 "path": "/tmp/tmp.4e8bETFlTP", 00:15:14.967 "method": "keyring_file_add_key", 00:15:14.967 "req_id": 1 00:15:14.967 } 00:15:14.967 Got JSON-RPC error response 00:15:14.967 response: 00:15:14.967 { 00:15:14.967 "code": -1, 00:15:14.967 "message": "Operation not permitted" 00:15:14.967 } 00:15:14.967 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:15.226 [2024-11-15 10:38:40.683125] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:15:15.226 [2024-11-15 10:38:40.683221] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:15.226 request: 00:15:15.226 { 00:15:15.226 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.226 "host": "nqn.2016-06.io.spdk:host1", 00:15:15.226 "psk": "key0", 00:15:15.226 "method": "nvmf_subsystem_add_host", 00:15:15.226 "req_id": 1 00:15:15.226 } 00:15:15.226 Got JSON-RPC error response 00:15:15.226 response: 00:15:15.226 { 00:15:15.226 "code": -32603, 00:15:15.226 "message": "Internal error" 00:15:15.226 } 00:15:15.226 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:15.226 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:15.226 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:15.226 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:15.226 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 72107 00:15:15.226 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72107 ']' 00:15:15.226 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72107 00:15:15.226 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:15.226 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:15.226 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72107 00:15:15.486 killing process with pid 72107 00:15:15.486 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:15.486 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:15.486 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72107' 00:15:15.486 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72107 00:15:15.486 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72107 00:15:15.486 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.4e8bETFlTP 00:15:15.486 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:15:15.486 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:15.486 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:15.486 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.486 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72169 00:15:15.486 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72169 00:15:15.486 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:15.486 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72169 ']' 00:15:15.486 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.486 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:15.486 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.486 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:15.486 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.744 [2024-11-15 10:38:41.024366] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:15:15.744 [2024-11-15 10:38:41.024452] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.744 [2024-11-15 10:38:41.169769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.744 [2024-11-15 10:38:41.230295] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.744 [2024-11-15 10:38:41.230361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.744 [2024-11-15 10:38:41.230374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.744 [2024-11-15 10:38:41.230383] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.744 [2024-11-15 10:38:41.230390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:15.744 [2024-11-15 10:38:41.230819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.002 [2024-11-15 10:38:41.286527] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:16.002 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:16.002 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:16.002 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:16.002 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:16.002 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:16.002 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:16.002 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.4e8bETFlTP 00:15:16.002 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.4e8bETFlTP 00:15:16.002 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:16.259 [2024-11-15 10:38:41.650087] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:16.259 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:16.517 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:16.776 [2024-11-15 10:38:42.158164] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:16.776 [2024-11-15 10:38:42.158407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:16.776 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:17.034 malloc0 00:15:17.034 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:17.292 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4e8bETFlTP 00:15:17.551 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:17.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
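Note: with the key file back at 0600, the same setup sequence now succeeds end to end, and tls.sh@198-199 below capture save_config from both the target and bdevperf so the persisted keyring entries can be compared. A sketch of extracting just that subsystem from the dumps; the helper and the equality check are ours, but the JSON layout matches the tgtconf/bdevperfconf output that follows:

    import json
    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

    def keyring_subsystem(rpc_sock=None):
        # save_config returns {"subsystems": [...]}; keep the keyring part.
        cmd = [RPC] + (["-s", rpc_sock] if rpc_sock else []) + ["save_config"]
        cfg = json.loads(subprocess.check_output(cmd))
        return [s for s in cfg["subsystems"] if s["subsystem"] == "keyring"]

    # Both sides should report key0 at /tmp/tmp.4e8bETFlTP:
    assert keyring_subsystem() == keyring_subsystem("/var/tmp/bdevperf.sock")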
00:15:17.810 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72218 00:15:17.810 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:17.810 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:17.810 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72218 /var/tmp/bdevperf.sock 00:15:17.810 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72218 ']' 00:15:17.810 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:17.810 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:17.810 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:17.810 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:17.810 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:18.068 [2024-11-15 10:38:43.349047] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:15:18.068 [2024-11-15 10:38:43.349343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72218 ] 00:15:18.068 [2024-11-15 10:38:43.507999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.329 [2024-11-15 10:38:43.581254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.329 [2024-11-15 10:38:43.641765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:18.897 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:18.897 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:18.897 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4e8bETFlTP 00:15:19.155 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:19.412 [2024-11-15 10:38:44.847053] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:19.670 TLSTESTn1 00:15:19.670 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:19.928 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:15:19.928 "subsystems": [ 00:15:19.928 { 00:15:19.928 "subsystem": "keyring", 00:15:19.928 "config": [ 00:15:19.928 { 00:15:19.928 "method": "keyring_file_add_key", 00:15:19.928 "params": { 00:15:19.928 "name": "key0", 00:15:19.928 "path": "/tmp/tmp.4e8bETFlTP" 00:15:19.928 } 00:15:19.928 } 00:15:19.928 ] 00:15:19.928 }, 
00:15:19.928 { 00:15:19.928 "subsystem": "iobuf", 00:15:19.928 "config": [ 00:15:19.928 { 00:15:19.928 "method": "iobuf_set_options", 00:15:19.928 "params": { 00:15:19.928 "small_pool_count": 8192, 00:15:19.928 "large_pool_count": 1024, 00:15:19.928 "small_bufsize": 8192, 00:15:19.928 "large_bufsize": 135168, 00:15:19.928 "enable_numa": false 00:15:19.928 } 00:15:19.928 } 00:15:19.928 ] 00:15:19.928 }, 00:15:19.928 { 00:15:19.928 "subsystem": "sock", 00:15:19.928 "config": [ 00:15:19.928 { 00:15:19.928 "method": "sock_set_default_impl", 00:15:19.928 "params": { 00:15:19.928 "impl_name": "uring" 00:15:19.928 } 00:15:19.928 }, 00:15:19.928 { 00:15:19.928 "method": "sock_impl_set_options", 00:15:19.928 "params": { 00:15:19.928 "impl_name": "ssl", 00:15:19.928 "recv_buf_size": 4096, 00:15:19.928 "send_buf_size": 4096, 00:15:19.928 "enable_recv_pipe": true, 00:15:19.928 "enable_quickack": false, 00:15:19.928 "enable_placement_id": 0, 00:15:19.928 "enable_zerocopy_send_server": true, 00:15:19.928 "enable_zerocopy_send_client": false, 00:15:19.928 "zerocopy_threshold": 0, 00:15:19.928 "tls_version": 0, 00:15:19.928 "enable_ktls": false 00:15:19.928 } 00:15:19.928 }, 00:15:19.928 { 00:15:19.928 "method": "sock_impl_set_options", 00:15:19.928 "params": { 00:15:19.928 "impl_name": "posix", 00:15:19.928 "recv_buf_size": 2097152, 00:15:19.928 "send_buf_size": 2097152, 00:15:19.928 "enable_recv_pipe": true, 00:15:19.928 "enable_quickack": false, 00:15:19.928 "enable_placement_id": 0, 00:15:19.928 "enable_zerocopy_send_server": true, 00:15:19.928 "enable_zerocopy_send_client": false, 00:15:19.928 "zerocopy_threshold": 0, 00:15:19.928 "tls_version": 0, 00:15:19.928 "enable_ktls": false 00:15:19.928 } 00:15:19.928 }, 00:15:19.928 { 00:15:19.928 "method": "sock_impl_set_options", 00:15:19.928 "params": { 00:15:19.928 "impl_name": "uring", 00:15:19.928 "recv_buf_size": 2097152, 00:15:19.928 "send_buf_size": 2097152, 00:15:19.928 "enable_recv_pipe": true, 00:15:19.928 "enable_quickack": false, 00:15:19.928 "enable_placement_id": 0, 00:15:19.928 "enable_zerocopy_send_server": false, 00:15:19.928 "enable_zerocopy_send_client": false, 00:15:19.928 "zerocopy_threshold": 0, 00:15:19.928 "tls_version": 0, 00:15:19.928 "enable_ktls": false 00:15:19.928 } 00:15:19.928 } 00:15:19.928 ] 00:15:19.928 }, 00:15:19.928 { 00:15:19.928 "subsystem": "vmd", 00:15:19.928 "config": [] 00:15:19.928 }, 00:15:19.928 { 00:15:19.928 "subsystem": "accel", 00:15:19.928 "config": [ 00:15:19.928 { 00:15:19.928 "method": "accel_set_options", 00:15:19.928 "params": { 00:15:19.928 "small_cache_size": 128, 00:15:19.928 "large_cache_size": 16, 00:15:19.928 "task_count": 2048, 00:15:19.928 "sequence_count": 2048, 00:15:19.928 "buf_count": 2048 00:15:19.928 } 00:15:19.928 } 00:15:19.928 ] 00:15:19.928 }, 00:15:19.928 { 00:15:19.928 "subsystem": "bdev", 00:15:19.928 "config": [ 00:15:19.928 { 00:15:19.928 "method": "bdev_set_options", 00:15:19.928 "params": { 00:15:19.928 "bdev_io_pool_size": 65535, 00:15:19.928 "bdev_io_cache_size": 256, 00:15:19.928 "bdev_auto_examine": true, 00:15:19.928 "iobuf_small_cache_size": 128, 00:15:19.929 "iobuf_large_cache_size": 16 00:15:19.929 } 00:15:19.929 }, 00:15:19.929 { 00:15:19.929 "method": "bdev_raid_set_options", 00:15:19.929 "params": { 00:15:19.929 "process_window_size_kb": 1024, 00:15:19.929 "process_max_bandwidth_mb_sec": 0 00:15:19.929 } 00:15:19.929 }, 00:15:19.929 { 00:15:19.929 "method": "bdev_iscsi_set_options", 00:15:19.929 "params": { 00:15:19.929 "timeout_sec": 30 00:15:19.929 } 00:15:19.929 
}, 00:15:19.929 { 00:15:19.929 "method": "bdev_nvme_set_options", 00:15:19.929 "params": { 00:15:19.929 "action_on_timeout": "none", 00:15:19.929 "timeout_us": 0, 00:15:19.929 "timeout_admin_us": 0, 00:15:19.929 "keep_alive_timeout_ms": 10000, 00:15:19.929 "arbitration_burst": 0, 00:15:19.929 "low_priority_weight": 0, 00:15:19.929 "medium_priority_weight": 0, 00:15:19.929 "high_priority_weight": 0, 00:15:19.929 "nvme_adminq_poll_period_us": 10000, 00:15:19.929 "nvme_ioq_poll_period_us": 0, 00:15:19.929 "io_queue_requests": 0, 00:15:19.929 "delay_cmd_submit": true, 00:15:19.929 "transport_retry_count": 4, 00:15:19.929 "bdev_retry_count": 3, 00:15:19.929 "transport_ack_timeout": 0, 00:15:19.929 "ctrlr_loss_timeout_sec": 0, 00:15:19.929 "reconnect_delay_sec": 0, 00:15:19.929 "fast_io_fail_timeout_sec": 0, 00:15:19.929 "disable_auto_failback": false, 00:15:19.929 "generate_uuids": false, 00:15:19.929 "transport_tos": 0, 00:15:19.929 "nvme_error_stat": false, 00:15:19.929 "rdma_srq_size": 0, 00:15:19.929 "io_path_stat": false, 00:15:19.929 "allow_accel_sequence": false, 00:15:19.929 "rdma_max_cq_size": 0, 00:15:19.929 "rdma_cm_event_timeout_ms": 0, 00:15:19.929 "dhchap_digests": [ 00:15:19.929 "sha256", 00:15:19.929 "sha384", 00:15:19.929 "sha512" 00:15:19.929 ], 00:15:19.929 "dhchap_dhgroups": [ 00:15:19.929 "null", 00:15:19.929 "ffdhe2048", 00:15:19.929 "ffdhe3072", 00:15:19.929 "ffdhe4096", 00:15:19.929 "ffdhe6144", 00:15:19.929 "ffdhe8192" 00:15:19.929 ] 00:15:19.929 } 00:15:19.929 }, 00:15:19.929 { 00:15:19.929 "method": "bdev_nvme_set_hotplug", 00:15:19.929 "params": { 00:15:19.929 "period_us": 100000, 00:15:19.929 "enable": false 00:15:19.929 } 00:15:19.929 }, 00:15:19.929 { 00:15:19.929 "method": "bdev_malloc_create", 00:15:19.929 "params": { 00:15:19.929 "name": "malloc0", 00:15:19.929 "num_blocks": 8192, 00:15:19.929 "block_size": 4096, 00:15:19.929 "physical_block_size": 4096, 00:15:19.929 "uuid": "b1c3163e-d911-4538-a1e6-ae4db2f534f8", 00:15:19.929 "optimal_io_boundary": 0, 00:15:19.929 "md_size": 0, 00:15:19.929 "dif_type": 0, 00:15:19.929 "dif_is_head_of_md": false, 00:15:19.929 "dif_pi_format": 0 00:15:19.929 } 00:15:19.929 }, 00:15:19.929 { 00:15:19.929 "method": "bdev_wait_for_examine" 00:15:19.929 } 00:15:19.929 ] 00:15:19.929 }, 00:15:19.929 { 00:15:19.929 "subsystem": "nbd", 00:15:19.929 "config": [] 00:15:19.929 }, 00:15:19.929 { 00:15:19.929 "subsystem": "scheduler", 00:15:19.929 "config": [ 00:15:19.929 { 00:15:19.929 "method": "framework_set_scheduler", 00:15:19.929 "params": { 00:15:19.929 "name": "static" 00:15:19.929 } 00:15:19.929 } 00:15:19.929 ] 00:15:19.929 }, 00:15:19.929 { 00:15:19.929 "subsystem": "nvmf", 00:15:19.929 "config": [ 00:15:19.929 { 00:15:19.929 "method": "nvmf_set_config", 00:15:19.929 "params": { 00:15:19.929 "discovery_filter": "match_any", 00:15:19.929 "admin_cmd_passthru": { 00:15:19.929 "identify_ctrlr": false 00:15:19.929 }, 00:15:19.929 "dhchap_digests": [ 00:15:19.929 "sha256", 00:15:19.929 "sha384", 00:15:19.929 "sha512" 00:15:19.929 ], 00:15:19.929 "dhchap_dhgroups": [ 00:15:19.929 "null", 00:15:19.929 "ffdhe2048", 00:15:19.929 "ffdhe3072", 00:15:19.929 "ffdhe4096", 00:15:19.929 "ffdhe6144", 00:15:19.929 "ffdhe8192" 00:15:19.929 ] 00:15:19.929 } 00:15:19.929 }, 00:15:19.929 { 00:15:19.929 "method": "nvmf_set_max_subsystems", 00:15:19.929 "params": { 00:15:19.929 "max_subsystems": 1024 00:15:19.929 } 00:15:19.929 }, 00:15:19.929 { 00:15:19.929 "method": "nvmf_set_crdt", 00:15:19.929 "params": { 00:15:19.929 "crdt1": 0, 00:15:19.929 
"crdt2": 0, 00:15:19.929 "crdt3": 0 00:15:19.929 } 00:15:19.929 }, 00:15:19.929 { 00:15:19.929 "method": "nvmf_create_transport", 00:15:19.929 "params": { 00:15:19.929 "trtype": "TCP", 00:15:19.929 "max_queue_depth": 128, 00:15:19.929 "max_io_qpairs_per_ctrlr": 127, 00:15:19.929 "in_capsule_data_size": 4096, 00:15:19.929 "max_io_size": 131072, 00:15:19.929 "io_unit_size": 131072, 00:15:19.929 "max_aq_depth": 128, 00:15:19.929 "num_shared_buffers": 511, 00:15:19.929 "buf_cache_size": 4294967295, 00:15:19.929 "dif_insert_or_strip": false, 00:15:19.929 "zcopy": false, 00:15:19.929 "c2h_success": false, 00:15:19.929 "sock_priority": 0, 00:15:19.929 "abort_timeout_sec": 1, 00:15:19.929 "ack_timeout": 0, 00:15:19.929 "data_wr_pool_size": 0 00:15:19.929 } 00:15:19.929 }, 00:15:19.929 { 00:15:19.929 "method": "nvmf_create_subsystem", 00:15:19.929 "params": { 00:15:19.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.929 "allow_any_host": false, 00:15:19.929 "serial_number": "SPDK00000000000001", 00:15:19.929 "model_number": "SPDK bdev Controller", 00:15:19.929 "max_namespaces": 10, 00:15:19.929 "min_cntlid": 1, 00:15:19.929 "max_cntlid": 65519, 00:15:19.929 "ana_reporting": false 00:15:19.929 } 00:15:19.929 }, 00:15:19.929 { 00:15:19.929 "method": "nvmf_subsystem_add_host", 00:15:19.929 "params": { 00:15:19.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.929 "host": "nqn.2016-06.io.spdk:host1", 00:15:19.929 "psk": "key0" 00:15:19.929 } 00:15:19.929 }, 00:15:19.929 { 00:15:19.929 "method": "nvmf_subsystem_add_ns", 00:15:19.929 "params": { 00:15:19.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.929 "namespace": { 00:15:19.929 "nsid": 1, 00:15:19.929 "bdev_name": "malloc0", 00:15:19.929 "nguid": "B1C3163ED9114538A1E6AE4DB2F534F8", 00:15:19.929 "uuid": "b1c3163e-d911-4538-a1e6-ae4db2f534f8", 00:15:19.929 "no_auto_visible": false 00:15:19.929 } 00:15:19.929 } 00:15:19.929 }, 00:15:19.929 { 00:15:19.929 "method": "nvmf_subsystem_add_listener", 00:15:19.929 "params": { 00:15:19.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.929 "listen_address": { 00:15:19.929 "trtype": "TCP", 00:15:19.929 "adrfam": "IPv4", 00:15:19.929 "traddr": "10.0.0.3", 00:15:19.929 "trsvcid": "4420" 00:15:19.929 }, 00:15:19.929 "secure_channel": true 00:15:19.929 } 00:15:19.929 } 00:15:19.929 ] 00:15:19.929 } 00:15:19.929 ] 00:15:19.929 }' 00:15:19.929 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:20.495 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:15:20.495 "subsystems": [ 00:15:20.495 { 00:15:20.495 "subsystem": "keyring", 00:15:20.495 "config": [ 00:15:20.495 { 00:15:20.495 "method": "keyring_file_add_key", 00:15:20.495 "params": { 00:15:20.495 "name": "key0", 00:15:20.495 "path": "/tmp/tmp.4e8bETFlTP" 00:15:20.495 } 00:15:20.495 } 00:15:20.495 ] 00:15:20.495 }, 00:15:20.495 { 00:15:20.495 "subsystem": "iobuf", 00:15:20.495 "config": [ 00:15:20.495 { 00:15:20.495 "method": "iobuf_set_options", 00:15:20.495 "params": { 00:15:20.495 "small_pool_count": 8192, 00:15:20.495 "large_pool_count": 1024, 00:15:20.495 "small_bufsize": 8192, 00:15:20.495 "large_bufsize": 135168, 00:15:20.495 "enable_numa": false 00:15:20.495 } 00:15:20.495 } 00:15:20.495 ] 00:15:20.495 }, 00:15:20.495 { 00:15:20.495 "subsystem": "sock", 00:15:20.495 "config": [ 00:15:20.495 { 00:15:20.495 "method": "sock_set_default_impl", 00:15:20.495 "params": { 00:15:20.495 "impl_name": "uring" 00:15:20.495 
} 00:15:20.495 }, 00:15:20.495 { 00:15:20.495 "method": "sock_impl_set_options", 00:15:20.495 "params": { 00:15:20.495 "impl_name": "ssl", 00:15:20.495 "recv_buf_size": 4096, 00:15:20.495 "send_buf_size": 4096, 00:15:20.495 "enable_recv_pipe": true, 00:15:20.495 "enable_quickack": false, 00:15:20.495 "enable_placement_id": 0, 00:15:20.495 "enable_zerocopy_send_server": true, 00:15:20.495 "enable_zerocopy_send_client": false, 00:15:20.495 "zerocopy_threshold": 0, 00:15:20.495 "tls_version": 0, 00:15:20.495 "enable_ktls": false 00:15:20.495 } 00:15:20.495 }, 00:15:20.495 { 00:15:20.495 "method": "sock_impl_set_options", 00:15:20.495 "params": { 00:15:20.495 "impl_name": "posix", 00:15:20.495 "recv_buf_size": 2097152, 00:15:20.495 "send_buf_size": 2097152, 00:15:20.495 "enable_recv_pipe": true, 00:15:20.495 "enable_quickack": false, 00:15:20.495 "enable_placement_id": 0, 00:15:20.495 "enable_zerocopy_send_server": true, 00:15:20.495 "enable_zerocopy_send_client": false, 00:15:20.495 "zerocopy_threshold": 0, 00:15:20.495 "tls_version": 0, 00:15:20.495 "enable_ktls": false 00:15:20.495 } 00:15:20.495 }, 00:15:20.495 { 00:15:20.495 "method": "sock_impl_set_options", 00:15:20.495 "params": { 00:15:20.495 "impl_name": "uring", 00:15:20.495 "recv_buf_size": 2097152, 00:15:20.495 "send_buf_size": 2097152, 00:15:20.495 "enable_recv_pipe": true, 00:15:20.495 "enable_quickack": false, 00:15:20.495 "enable_placement_id": 0, 00:15:20.495 "enable_zerocopy_send_server": false, 00:15:20.495 "enable_zerocopy_send_client": false, 00:15:20.495 "zerocopy_threshold": 0, 00:15:20.495 "tls_version": 0, 00:15:20.495 "enable_ktls": false 00:15:20.495 } 00:15:20.495 } 00:15:20.495 ] 00:15:20.495 }, 00:15:20.495 { 00:15:20.495 "subsystem": "vmd", 00:15:20.495 "config": [] 00:15:20.495 }, 00:15:20.495 { 00:15:20.495 "subsystem": "accel", 00:15:20.495 "config": [ 00:15:20.495 { 00:15:20.495 "method": "accel_set_options", 00:15:20.495 "params": { 00:15:20.495 "small_cache_size": 128, 00:15:20.495 "large_cache_size": 16, 00:15:20.495 "task_count": 2048, 00:15:20.495 "sequence_count": 2048, 00:15:20.495 "buf_count": 2048 00:15:20.495 } 00:15:20.495 } 00:15:20.495 ] 00:15:20.495 }, 00:15:20.495 { 00:15:20.495 "subsystem": "bdev", 00:15:20.495 "config": [ 00:15:20.495 { 00:15:20.495 "method": "bdev_set_options", 00:15:20.495 "params": { 00:15:20.495 "bdev_io_pool_size": 65535, 00:15:20.495 "bdev_io_cache_size": 256, 00:15:20.495 "bdev_auto_examine": true, 00:15:20.495 "iobuf_small_cache_size": 128, 00:15:20.495 "iobuf_large_cache_size": 16 00:15:20.495 } 00:15:20.495 }, 00:15:20.495 { 00:15:20.495 "method": "bdev_raid_set_options", 00:15:20.495 "params": { 00:15:20.495 "process_window_size_kb": 1024, 00:15:20.495 "process_max_bandwidth_mb_sec": 0 00:15:20.495 } 00:15:20.495 }, 00:15:20.495 { 00:15:20.495 "method": "bdev_iscsi_set_options", 00:15:20.496 "params": { 00:15:20.496 "timeout_sec": 30 00:15:20.496 } 00:15:20.496 }, 00:15:20.496 { 00:15:20.496 "method": "bdev_nvme_set_options", 00:15:20.496 "params": { 00:15:20.496 "action_on_timeout": "none", 00:15:20.496 "timeout_us": 0, 00:15:20.496 "timeout_admin_us": 0, 00:15:20.496 "keep_alive_timeout_ms": 10000, 00:15:20.496 "arbitration_burst": 0, 00:15:20.496 "low_priority_weight": 0, 00:15:20.496 "medium_priority_weight": 0, 00:15:20.496 "high_priority_weight": 0, 00:15:20.496 "nvme_adminq_poll_period_us": 10000, 00:15:20.496 "nvme_ioq_poll_period_us": 0, 00:15:20.496 "io_queue_requests": 512, 00:15:20.496 "delay_cmd_submit": true, 00:15:20.496 "transport_retry_count": 4, 
00:15:20.496 "bdev_retry_count": 3, 00:15:20.496 "transport_ack_timeout": 0, 00:15:20.496 "ctrlr_loss_timeout_sec": 0, 00:15:20.496 "reconnect_delay_sec": 0, 00:15:20.496 "fast_io_fail_timeout_sec": 0, 00:15:20.496 "disable_auto_failback": false, 00:15:20.496 "generate_uuids": false, 00:15:20.496 "transport_tos": 0, 00:15:20.496 "nvme_error_stat": false, 00:15:20.496 "rdma_srq_size": 0, 00:15:20.496 "io_path_stat": false, 00:15:20.496 "allow_accel_sequence": false, 00:15:20.496 "rdma_max_cq_size": 0, 00:15:20.496 "rdma_cm_event_timeout_ms": 0, 00:15:20.496 "dhchap_digests": [ 00:15:20.496 "sha256", 00:15:20.496 "sha384", 00:15:20.496 "sha512" 00:15:20.496 ], 00:15:20.496 "dhchap_dhgroups": [ 00:15:20.496 "null", 00:15:20.496 "ffdhe2048", 00:15:20.496 "ffdhe3072", 00:15:20.496 "ffdhe4096", 00:15:20.496 "ffdhe6144", 00:15:20.496 "ffdhe8192" 00:15:20.496 ] 00:15:20.496 } 00:15:20.496 }, 00:15:20.496 { 00:15:20.496 "method": "bdev_nvme_attach_controller", 00:15:20.496 "params": { 00:15:20.496 "name": "TLSTEST", 00:15:20.496 "trtype": "TCP", 00:15:20.496 "adrfam": "IPv4", 00:15:20.496 "traddr": "10.0.0.3", 00:15:20.496 "trsvcid": "4420", 00:15:20.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.496 "prchk_reftag": false, 00:15:20.496 "prchk_guard": false, 00:15:20.496 "ctrlr_loss_timeout_sec": 0, 00:15:20.496 "reconnect_delay_sec": 0, 00:15:20.496 "fast_io_fail_timeout_sec": 0, 00:15:20.496 "psk": "key0", 00:15:20.496 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:20.496 "hdgst": false, 00:15:20.496 "ddgst": false, 00:15:20.496 "multipath": "multipath" 00:15:20.496 } 00:15:20.496 }, 00:15:20.496 { 00:15:20.496 "method": "bdev_nvme_set_hotplug", 00:15:20.496 "params": { 00:15:20.496 "period_us": 100000, 00:15:20.496 "enable": false 00:15:20.496 } 00:15:20.496 }, 00:15:20.496 { 00:15:20.496 "method": "bdev_wait_for_examine" 00:15:20.496 } 00:15:20.496 ] 00:15:20.496 }, 00:15:20.496 { 00:15:20.496 "subsystem": "nbd", 00:15:20.496 "config": [] 00:15:20.496 } 00:15:20.496 ] 00:15:20.496 }' 00:15:20.496 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72218 00:15:20.496 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72218 ']' 00:15:20.496 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72218 00:15:20.496 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:20.496 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:20.496 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72218 00:15:20.496 killing process with pid 72218 00:15:20.496 Received shutdown signal, test time was about 10.000000 seconds 00:15:20.496 00:15:20.496 Latency(us) 00:15:20.496 [2024-11-15T10:38:45.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.496 [2024-11-15T10:38:45.994Z] =================================================================================================================== 00:15:20.496 [2024-11-15T10:38:45.994Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:20.496 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:15:20.496 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:15:20.496 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 72218' 00:15:20.496 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72218 00:15:20.496 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72218 00:15:20.496 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72169 00:15:20.496 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72169 ']' 00:15:20.496 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72169 00:15:20.496 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:20.496 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:20.496 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72169 00:15:20.754 killing process with pid 72169 00:15:20.754 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:20.754 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:20.754 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72169' 00:15:20.754 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72169 00:15:20.754 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72169 00:15:20.754 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:20.754 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:15:20.754 "subsystems": [ 00:15:20.754 { 00:15:20.754 "subsystem": "keyring", 00:15:20.754 "config": [ 00:15:20.754 { 00:15:20.754 "method": "keyring_file_add_key", 00:15:20.754 "params": { 00:15:20.754 "name": "key0", 00:15:20.754 "path": "/tmp/tmp.4e8bETFlTP" 00:15:20.754 } 00:15:20.754 } 00:15:20.754 ] 00:15:20.754 }, 00:15:20.754 { 00:15:20.754 "subsystem": "iobuf", 00:15:20.754 "config": [ 00:15:20.754 { 00:15:20.754 "method": "iobuf_set_options", 00:15:20.754 "params": { 00:15:20.754 "small_pool_count": 8192, 00:15:20.754 "large_pool_count": 1024, 00:15:20.754 "small_bufsize": 8192, 00:15:20.754 "large_bufsize": 135168, 00:15:20.754 "enable_numa": false 00:15:20.754 } 00:15:20.754 } 00:15:20.754 ] 00:15:20.754 }, 00:15:20.754 { 00:15:20.754 "subsystem": "sock", 00:15:20.754 "config": [ 00:15:20.754 { 00:15:20.754 "method": "sock_set_default_impl", 00:15:20.754 "params": { 00:15:20.754 "impl_name": "uring" 00:15:20.754 } 00:15:20.754 }, 00:15:20.754 { 00:15:20.754 "method": "sock_impl_set_options", 00:15:20.754 "params": { 00:15:20.754 "impl_name": "ssl", 00:15:20.754 "recv_buf_size": 4096, 00:15:20.754 "send_buf_size": 4096, 00:15:20.754 "enable_recv_pipe": true, 00:15:20.754 "enable_quickack": false, 00:15:20.754 "enable_placement_id": 0, 00:15:20.754 "enable_zerocopy_send_server": true, 00:15:20.754 "enable_zerocopy_send_client": false, 00:15:20.754 "zerocopy_threshold": 0, 00:15:20.754 "tls_version": 0, 00:15:20.754 "enable_ktls": false 00:15:20.754 } 00:15:20.754 }, 00:15:20.754 { 00:15:20.754 "method": "sock_impl_set_options", 00:15:20.754 "params": { 00:15:20.754 "impl_name": "posix", 00:15:20.754 "recv_buf_size": 2097152, 00:15:20.754 "send_buf_size": 2097152, 00:15:20.754 "enable_recv_pipe": true, 00:15:20.754 "enable_quickack": false, 
00:15:20.754 "enable_placement_id": 0, 00:15:20.754 "enable_zerocopy_send_server": true, 00:15:20.754 "enable_zerocopy_send_client": false, 00:15:20.754 "zerocopy_threshold": 0, 00:15:20.754 "tls_version": 0, 00:15:20.754 "enable_ktls": false 00:15:20.754 } 00:15:20.754 }, 00:15:20.754 { 00:15:20.754 "method": "sock_impl_set_options", 00:15:20.754 "params": { 00:15:20.754 "impl_name": "uring", 00:15:20.754 "recv_buf_size": 2097152, 00:15:20.754 "send_buf_size": 2097152, 00:15:20.754 "enable_recv_pipe": true, 00:15:20.754 "enable_quickack": false, 00:15:20.754 "enable_placement_id": 0, 00:15:20.754 "enable_zerocopy_send_server": false, 00:15:20.754 "enable_zerocopy_send_client": false, 00:15:20.754 "zerocopy_threshold": 0, 00:15:20.754 "tls_version": 0, 00:15:20.754 "enable_ktls": false 00:15:20.754 } 00:15:20.754 } 00:15:20.754 ] 00:15:20.754 }, 00:15:20.754 { 00:15:20.754 "subsystem": "vmd", 00:15:20.754 "config": [] 00:15:20.754 }, 00:15:20.754 { 00:15:20.754 "subsystem": "accel", 00:15:20.754 "config": [ 00:15:20.754 { 00:15:20.754 "method": "accel_set_options", 00:15:20.754 "params": { 00:15:20.755 "small_cache_size": 128, 00:15:20.755 "large_cache_size": 16, 00:15:20.755 "task_count": 2048, 00:15:20.755 "sequence_count": 2048, 00:15:20.755 "buf_count": 2048 00:15:20.755 } 00:15:20.755 } 00:15:20.755 ] 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "subsystem": "bdev", 00:15:20.755 "config": [ 00:15:20.755 { 00:15:20.755 "method": "bdev_set_options", 00:15:20.755 "params": { 00:15:20.755 "bdev_io_pool_size": 65535, 00:15:20.755 "bdev_io_cache_size": 256, 00:15:20.755 "bdev_auto_examine": true, 00:15:20.755 "iobuf_small_cache_size": 128, 00:15:20.755 "iobuf_large_cache_size": 16 00:15:20.755 } 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "method": "bdev_raid_set_options", 00:15:20.755 "params": { 00:15:20.755 "process_window_size_kb": 1024, 00:15:20.755 "process_max_bandwidth_mb_sec": 0 00:15:20.755 } 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "method": "bdev_iscsi_set_options", 00:15:20.755 "params": { 00:15:20.755 "timeout_sec": 30 00:15:20.755 } 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "method": "bdev_nvme_set_options", 00:15:20.755 "params": { 00:15:20.755 "action_on_timeout": "none", 00:15:20.755 "timeout_us": 0, 00:15:20.755 "timeout_admin_us": 0, 00:15:20.755 "keep_alive_timeout_ms": 10000, 00:15:20.755 "arbitration_burst": 0, 00:15:20.755 "low_priority_weight": 0, 00:15:20.755 "medium_priority_weight": 0, 00:15:20.755 "high_priority_weight": 0, 00:15:20.755 "nvme_adminq_poll_period_us": 10000, 00:15:20.755 "nvme_ioq_poll_period_us": 0, 00:15:20.755 "io_queue_requests": 0, 00:15:20.755 "delay_cmd_submit": true, 00:15:20.755 "transport_retry_count": 4, 00:15:20.755 "bdev_retry_count": 3, 00:15:20.755 "transport_ack_timeout": 0, 00:15:20.755 "ctrlr_loss_timeout_sec": 0, 00:15:20.755 "reconnect_delay_sec": 0, 00:15:20.755 "fast_io_fail_timeout_sec": 0, 00:15:20.755 "disable_auto_failback": false, 00:15:20.755 "generate_uuids": false, 00:15:20.755 "transport_tos": 0, 00:15:20.755 "nvme_error_stat": false, 00:15:20.755 "rdma_srq_size": 0, 00:15:20.755 "io_path_stat": false, 00:15:20.755 "allow_accel_sequence": false, 00:15:20.755 "rdma_max_cq_size": 0, 00:15:20.755 "rdma_cm_event_timeout_ms": 0, 00:15:20.755 "dhchap_digests": [ 00:15:20.755 "sha256", 00:15:20.755 "sha384", 00:15:20.755 "sha512" 00:15:20.755 ], 00:15:20.755 "dhchap_dhgroups": [ 00:15:20.755 "null", 00:15:20.755 "ffdhe2048", 00:15:20.755 "ffdhe3072", 00:15:20.755 "ffdhe4096", 00:15:20.755 "ffdhe6144", 00:15:20.755 
"ffdhe8192" 00:15:20.755 ] 00:15:20.755 } 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "method": "bdev_nvme_set_hotplug", 00:15:20.755 "params": { 00:15:20.755 "period_us": 100000, 00:15:20.755 "enable": false 00:15:20.755 } 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "method": "bdev_malloc_create", 00:15:20.755 "params": { 00:15:20.755 "name": "malloc0", 00:15:20.755 "num_blocks": 8192, 00:15:20.755 "block_size": 4096, 00:15:20.755 "physical_block_size": 4096, 00:15:20.755 "uuid": "b1c3163e-d911-4538-a1e6-ae4db2f534f8", 00:15:20.755 "optimal_io_boundary": 0, 00:15:20.755 "md_size": 0, 00:15:20.755 "dif_type": 0, 00:15:20.755 "dif_is_head_of_md": false, 00:15:20.755 "dif_pi_format": 0 00:15:20.755 } 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "method": "bdev_wait_for_examine" 00:15:20.755 } 00:15:20.755 ] 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "subsystem": "nbd", 00:15:20.755 "config": [] 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "subsystem": "scheduler", 00:15:20.755 "config": [ 00:15:20.755 { 00:15:20.755 "method": "framework_set_scheduler", 00:15:20.755 "params": { 00:15:20.755 "name": "static" 00:15:20.755 } 00:15:20.755 } 00:15:20.755 ] 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "subsystem": "nvmf", 00:15:20.755 "config": [ 00:15:20.755 { 00:15:20.755 "method": "nvmf_set_config", 00:15:20.755 "params": { 00:15:20.755 "discovery_filter": "match_any", 00:15:20.755 "admin_cmd_passthru": { 00:15:20.755 "identify_ctrlr": false 00:15:20.755 }, 00:15:20.755 "dhchap_digests": [ 00:15:20.755 "sha256", 00:15:20.755 "sha384", 00:15:20.755 "sha512" 00:15:20.755 ], 00:15:20.755 "dhchap_dhgroups": [ 00:15:20.755 "null", 00:15:20.755 "ffdhe2048", 00:15:20.755 "ffdhe3072", 00:15:20.755 "ffdhe4096", 00:15:20.755 "ffdhe6144", 00:15:20.755 "ffdhe8192" 00:15:20.755 ] 00:15:20.755 } 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "method": "nvmf_set_max_subsystems", 00:15:20.755 "params": { 00:15:20.755 "max_subsystems": 1024 00:15:20.755 } 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "method": "nvmf_set_crdt", 00:15:20.755 "params": { 00:15:20.755 "crdt1": 0, 00:15:20.755 "crdt2": 0, 00:15:20.755 "crdt3": 0 00:15:20.755 } 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "method": "nvmf_create_transport", 00:15:20.755 "params": { 00:15:20.755 "trtype": "TCP", 00:15:20.755 "max_queue_depth": 128, 00:15:20.755 "max_io_qpairs_per_ctrlr": 127, 00:15:20.755 "in_capsule_data_size": 4096, 00:15:20.755 "max_io_size": 131072, 00:15:20.755 "io_unit_size": 131072, 00:15:20.755 "max_aq_depth": 128, 00:15:20.755 "num_shared_buffers": 511, 00:15:20.755 "buf_cache_size": 4294967295, 00:15:20.755 "dif_insert_or_strip": false, 00:15:20.755 "zcopy": false, 00:15:20.755 "c2h_success": false, 00:15:20.755 "sock_priority": 0, 00:15:20.755 "abort_timeout_sec": 1, 00:15:20.755 "ack_timeout": 0, 00:15:20.755 "data_wr_pool_size": 0 00:15:20.755 } 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "method": "nvmf_create_subsystem", 00:15:20.755 "params": { 00:15:20.755 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.755 "allow_any_host": false, 00:15:20.755 "serial_number": "SPDK00000000000001", 00:15:20.755 "model_number": "SPDK bdev Controller", 00:15:20.755 "max_namespaces": 10, 00:15:20.755 "min_cntlid": 1, 00:15:20.755 "max_cntlid": 65519, 00:15:20.755 "ana_reporting": false 00:15:20.755 } 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "method": "nvmf_subsystem_add_host", 00:15:20.755 "params": { 00:15:20.755 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.755 "host": "nqn.2016-06.io.spdk:host1", 00:15:20.755 "psk": "key0" 00:15:20.755 } 
00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "method": "nvmf_subsystem_add_ns", 00:15:20.755 "params": { 00:15:20.755 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.755 "namespace": { 00:15:20.755 "nsid": 1, 00:15:20.755 "bdev_name": "malloc0", 00:15:20.755 "nguid": "B1C3163ED9114538A1E6AE4DB2F534F8", 00:15:20.755 "uuid": "b1c3163e-d911-4538-a1e6-ae4db2f534f8", 00:15:20.755 "no_auto_visible": false 00:15:20.755 } 00:15:20.755 } 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "method": "nvmf_subsystem_add_listener", 00:15:20.755 "params": { 00:15:20.755 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.755 "listen_address": { 00:15:20.755 "trtype": "TCP", 00:15:20.755 "adrfam": "IPv4", 00:15:20.755 "traddr": "10.0.0.3", 00:15:20.755 "trsvcid": "4420" 00:15:20.755 }, 00:15:20.755 "secure_channel": true 00:15:20.755 } 00:15:20.755 } 00:15:20.755 ] 00:15:20.756 } 00:15:20.756 ] 00:15:20.756 }' 00:15:20.756 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:20.756 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:20.756 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:20.756 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72273 00:15:20.756 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:20.756 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72273 00:15:20.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.756 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72273 ']' 00:15:20.756 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.756 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:20.756 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.756 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:20.756 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:21.012 [2024-11-15 10:38:46.296243] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:15:21.012 [2024-11-15 10:38:46.296579] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.012 [2024-11-15 10:38:46.444794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.012 [2024-11-15 10:38:46.503471] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:21.012 [2024-11-15 10:38:46.503548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:21.012 [2024-11-15 10:38:46.503562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:21.012 [2024-11-15 10:38:46.503571] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:15:21.012 [2024-11-15 10:38:46.503578] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:21.012 [2024-11-15 10:38:46.504056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.270 [2024-11-15 10:38:46.671828] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:21.270 [2024-11-15 10:38:46.752874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.528 [2024-11-15 10:38:46.784809] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:21.528 [2024-11-15 10:38:46.785075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:22.093 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:22.093 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:22.093 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:22.093 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:22.093 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:22.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:22.093 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.093 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72305 00:15:22.093 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72305 /var/tmp/bdevperf.sock 00:15:22.093 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72305 ']' 00:15:22.093 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:22.093 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:22.093 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:22.093 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
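The bdevperf process launched above runs with -z (stay idle and wait for RPC configuration) and reads its JSON configuration from -c /dev/fd/63, an anonymous pipe rather than a file on disk; the large echo '{...}' that follows is the producer side of that pipe. A minimal sketch of the same pattern, assuming an SPDK build tree and with the JSON body abbreviated:

    # feed a JSON config to bdevperf over an anonymous fd instead of a temp file;
    # bash process substitution <(...) is what shows up as /dev/fd/63 in the trace
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 \
        -c <(echo '{ "subsystems": [ ... ] }')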
00:15:22.093 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:15:22.093 "subsystems": [ 00:15:22.093 { 00:15:22.093 "subsystem": "keyring", 00:15:22.093 "config": [ 00:15:22.093 { 00:15:22.093 "method": "keyring_file_add_key", 00:15:22.093 "params": { 00:15:22.093 "name": "key0", 00:15:22.093 "path": "/tmp/tmp.4e8bETFlTP" 00:15:22.093 } 00:15:22.093 } 00:15:22.093 ] 00:15:22.093 }, 00:15:22.093 { 00:15:22.093 "subsystem": "iobuf", 00:15:22.093 "config": [ 00:15:22.093 { 00:15:22.093 "method": "iobuf_set_options", 00:15:22.093 "params": { 00:15:22.093 "small_pool_count": 8192, 00:15:22.093 "large_pool_count": 1024, 00:15:22.093 "small_bufsize": 8192, 00:15:22.093 "large_bufsize": 135168, 00:15:22.093 "enable_numa": false 00:15:22.093 } 00:15:22.093 } 00:15:22.093 ] 00:15:22.093 }, 00:15:22.093 { 00:15:22.093 "subsystem": "sock", 00:15:22.093 "config": [ 00:15:22.093 { 00:15:22.093 "method": "sock_set_default_impl", 00:15:22.093 "params": { 00:15:22.093 "impl_name": "uring" 00:15:22.093 } 00:15:22.093 }, 00:15:22.093 { 00:15:22.093 "method": "sock_impl_set_options", 00:15:22.093 "params": { 00:15:22.093 "impl_name": "ssl", 00:15:22.093 "recv_buf_size": 4096, 00:15:22.093 "send_buf_size": 4096, 00:15:22.093 "enable_recv_pipe": true, 00:15:22.093 "enable_quickack": false, 00:15:22.093 "enable_placement_id": 0, 00:15:22.093 "enable_zerocopy_send_server": true, 00:15:22.093 "enable_zerocopy_send_client": false, 00:15:22.093 "zerocopy_threshold": 0, 00:15:22.093 "tls_version": 0, 00:15:22.093 "enable_ktls": false 00:15:22.093 } 00:15:22.093 }, 00:15:22.093 { 00:15:22.093 "method": "sock_impl_set_options", 00:15:22.093 "params": { 00:15:22.093 "impl_name": "posix", 00:15:22.093 "recv_buf_size": 2097152, 00:15:22.093 "send_buf_size": 2097152, 00:15:22.093 "enable_recv_pipe": true, 00:15:22.093 "enable_quickack": false, 00:15:22.093 "enable_placement_id": 0, 00:15:22.093 "enable_zerocopy_send_server": true, 00:15:22.093 "enable_zerocopy_send_client": false, 00:15:22.093 "zerocopy_threshold": 0, 00:15:22.093 "tls_version": 0, 00:15:22.093 "enable_ktls": false 00:15:22.093 } 00:15:22.093 }, 00:15:22.093 { 00:15:22.093 "method": "sock_impl_set_options", 00:15:22.093 "params": { 00:15:22.093 "impl_name": "uring", 00:15:22.093 "recv_buf_size": 2097152, 00:15:22.093 "send_buf_size": 2097152, 00:15:22.093 "enable_recv_pipe": true, 00:15:22.093 "enable_quickack": false, 00:15:22.093 "enable_placement_id": 0, 00:15:22.093 "enable_zerocopy_send_server": false, 00:15:22.093 "enable_zerocopy_send_client": false, 00:15:22.093 "zerocopy_threshold": 0, 00:15:22.093 "tls_version": 0, 00:15:22.093 "enable_ktls": false 00:15:22.093 } 00:15:22.093 } 00:15:22.093 ] 00:15:22.093 }, 00:15:22.093 { 00:15:22.093 "subsystem": "vmd", 00:15:22.093 "config": [] 00:15:22.093 }, 00:15:22.093 { 00:15:22.093 "subsystem": "accel", 00:15:22.093 "config": [ 00:15:22.093 { 00:15:22.093 "method": "accel_set_options", 00:15:22.093 "params": { 00:15:22.093 "small_cache_size": 128, 00:15:22.093 "large_cache_size": 16, 00:15:22.093 "task_count": 2048, 00:15:22.093 "sequence_count": 2048, 00:15:22.093 "buf_count": 2048 00:15:22.093 } 00:15:22.093 } 00:15:22.093 ] 00:15:22.093 }, 00:15:22.093 { 00:15:22.093 "subsystem": "bdev", 00:15:22.093 "config": [ 00:15:22.093 { 00:15:22.093 "method": "bdev_set_options", 00:15:22.093 "params": { 00:15:22.093 "bdev_io_pool_size": 65535, 00:15:22.093 "bdev_io_cache_size": 256, 00:15:22.093 "bdev_auto_examine": true, 00:15:22.093 "iobuf_small_cache_size": 128, 00:15:22.093 
"iobuf_large_cache_size": 16 00:15:22.093 } 00:15:22.093 }, 00:15:22.093 { 00:15:22.093 "method": "bdev_raid_set_options", 00:15:22.093 "params": { 00:15:22.093 "process_window_size_kb": 1024, 00:15:22.093 "process_max_bandwidth_mb_sec": 0 00:15:22.093 } 00:15:22.093 }, 00:15:22.093 { 00:15:22.093 "method": "bdev_iscsi_set_options", 00:15:22.093 "params": { 00:15:22.093 "timeout_sec": 30 00:15:22.093 } 00:15:22.093 }, 00:15:22.093 { 00:15:22.093 "method": "bdev_nvme_set_options", 00:15:22.093 "params": { 00:15:22.093 "action_on_timeout": "none", 00:15:22.093 "timeout_us": 0, 00:15:22.093 "timeout_admin_us": 0, 00:15:22.093 "keep_alive_timeout_ms": 10000, 00:15:22.093 "arbitration_burst": 0, 00:15:22.093 "low_priority_weight": 0, 00:15:22.093 "medium_priority_weight": 0, 00:15:22.093 "high_priority_weight": 0, 00:15:22.093 "nvme_adminq_poll_period_us": 10000, 00:15:22.093 "nvme_ioq_poll_period_us": 0, 00:15:22.093 "io_queue_requests": 512, 00:15:22.093 "delay_cmd_submit": true, 00:15:22.093 "transport_retry_count": 4, 00:15:22.093 "bdev_retry_count": 3, 00:15:22.093 "transport_ack_timeout": 0, 00:15:22.093 "ctrlr_loss_timeout_sec": 0, 00:15:22.093 "reconnect_delay_sec": 0, 00:15:22.093 "fast_io_fail_timeout_sec": 0, 00:15:22.093 "disable_auto_failback": false, 00:15:22.093 "generate_uuids": false, 00:15:22.093 "transport_tos": 0, 00:15:22.093 "nvme_error_stat": false, 00:15:22.093 "rdma_srq_size": 0, 00:15:22.093 "io_path_stat": false, 00:15:22.093 "allow_accel_sequence": false, 00:15:22.093 "rdma_max_cq_size": 0, 00:15:22.093 "rdma_cm_event_timeout_ms": 0, 00:15:22.093 "dhchap_digests": [ 00:15:22.093 "sha256", 00:15:22.093 "sha384", 00:15:22.093 "sha512" 00:15:22.093 ], 00:15:22.093 "dhchap_dhgroups": [ 00:15:22.093 "null", 00:15:22.093 "ffdhe2048", 00:15:22.093 "ffdhe3072", 00:15:22.093 "ffdhe4096", 00:15:22.093 "ffdhe6144", 00:15:22.094 "ffdhe8192" 00:15:22.094 ] 00:15:22.094 } 00:15:22.094 }, 00:15:22.094 { 00:15:22.094 "method": "bdev_nvme_attach_controller", 00:15:22.094 "params": { 00:15:22.094 "name": "TLSTEST", 00:15:22.094 "trtype": "TCP", 00:15:22.094 "adrfam": "IPv4", 00:15:22.094 "traddr": "10.0.0.3", 00:15:22.094 "trsvcid": "4420", 00:15:22.094 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:22.094 "prchk_reftag": false, 00:15:22.094 "prchk_guard": false, 00:15:22.094 "ctrlr_loss_timeout_sec": 0, 00:15:22.094 "reconnect_delay_sec": 0, 00:15:22.094 "fast_io_fail_timeout_sec": 0, 00:15:22.094 "psk": "key0", 00:15:22.094 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:22.094 "hdgst": false, 00:15:22.094 "ddgst": false, 00:15:22.094 "multipath": "multipath" 00:15:22.094 } 00:15:22.094 }, 00:15:22.094 { 00:15:22.094 "method": "bdev_nvme_set_hotplug", 00:15:22.094 "params": { 00:15:22.094 "period_us": 100000, 00:15:22.094 "enable": false 00:15:22.094 } 00:15:22.094 }, 00:15:22.094 { 00:15:22.094 "method": "bdev_wait_for_examine" 00:15:22.094 } 00:15:22.094 ] 00:15:22.094 }, 00:15:22.094 { 00:15:22.094 "subsystem": "nbd", 00:15:22.094 "config": [] 00:15:22.094 } 00:15:22.094 ] 00:15:22.094 }' 00:15:22.094 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:22.094 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:22.094 [2024-11-15 10:38:47.407622] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:15:22.094 [2024-11-15 10:38:47.408088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72305 ] 00:15:22.094 [2024-11-15 10:38:47.550360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.351 [2024-11-15 10:38:47.615194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.351 [2024-11-15 10:38:47.752191] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:22.351 [2024-11-15 10:38:47.803695] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:23.285 10:38:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:23.285 10:38:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:23.285 10:38:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:23.285 Running I/O for 10 seconds... 00:15:25.152 4009.00 IOPS, 15.66 MiB/s [2024-11-15T10:38:51.583Z] 4052.00 IOPS, 15.83 MiB/s [2024-11-15T10:38:52.970Z] 4071.33 IOPS, 15.90 MiB/s [2024-11-15T10:38:53.902Z] 4081.00 IOPS, 15.94 MiB/s [2024-11-15T10:38:54.836Z] 4072.60 IOPS, 15.91 MiB/s [2024-11-15T10:38:55.770Z] 4061.17 IOPS, 15.86 MiB/s [2024-11-15T10:38:56.709Z] 4061.00 IOPS, 15.86 MiB/s [2024-11-15T10:38:57.661Z] 4064.75 IOPS, 15.88 MiB/s [2024-11-15T10:38:58.596Z] 4056.11 IOPS, 15.84 MiB/s [2024-11-15T10:38:58.596Z] 4052.60 IOPS, 15.83 MiB/s 00:15:33.098 Latency(us) 00:15:33.098 [2024-11-15T10:38:58.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.098 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:33.098 Verification LBA range: start 0x0 length 0x2000 00:15:33.098 TLSTESTn1 : 10.01 4059.12 15.86 0.00 0.00 31477.37 4736.47 33363.78 00:15:33.098 [2024-11-15T10:38:58.596Z] =================================================================================================================== 00:15:33.098 [2024-11-15T10:38:58.596Z] Total : 4059.12 15.86 0.00 0.00 31477.37 4736.47 33363.78 00:15:33.098 { 00:15:33.098 "results": [ 00:15:33.098 { 00:15:33.098 "job": "TLSTESTn1", 00:15:33.098 "core_mask": "0x4", 00:15:33.098 "workload": "verify", 00:15:33.098 "status": "finished", 00:15:33.098 "verify_range": { 00:15:33.098 "start": 0, 00:15:33.098 "length": 8192 00:15:33.098 }, 00:15:33.098 "queue_depth": 128, 00:15:33.098 "io_size": 4096, 00:15:33.098 "runtime": 10.014724, 00:15:33.098 "iops": 4059.123346784195, 00:15:33.098 "mibps": 15.855950573375761, 00:15:33.098 "io_failed": 0, 00:15:33.098 "io_timeout": 0, 00:15:33.098 "avg_latency_us": 31477.370836902144, 00:15:33.098 "min_latency_us": 4736.465454545454, 00:15:33.098 "max_latency_us": 33363.781818181815 00:15:33.098 } 00:15:33.098 ], 00:15:33.098 "core_count": 1 00:15:33.098 } 00:15:33.098 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:33.098 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72305 00:15:33.098 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72305 ']' 00:15:33.098 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # kill -0 72305 00:15:33.098 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:33.098 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:33.098 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72305 00:15:33.357 killing process with pid 72305 00:15:33.357 Received shutdown signal, test time was about 10.000000 seconds 00:15:33.357 00:15:33.357 Latency(us) 00:15:33.357 [2024-11-15T10:38:58.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.357 [2024-11-15T10:38:58.855Z] =================================================================================================================== 00:15:33.357 [2024-11-15T10:38:58.855Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:33.357 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:15:33.357 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:15:33.357 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72305' 00:15:33.357 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72305 00:15:33.357 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72305 00:15:33.357 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72273 00:15:33.357 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72273 ']' 00:15:33.357 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72273 00:15:33.357 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:33.357 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:33.357 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72273 00:15:33.357 killing process with pid 72273 00:15:33.357 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:33.357 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:33.357 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72273' 00:15:33.357 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72273 00:15:33.357 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72273 00:15:33.616 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:15:33.616 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:33.616 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:33.616 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
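The killprocess helper traced above follows one pattern for every pid it tears down: probe the pid with kill -0, log the process name via ps, then signal and reap it. A simplified sketch of that logic (the real helper in autotest_common.sh also special-cases processes running under sudo, as the reactor_1/reactor_2 comparisons in the trace show):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return          # is the process still alive?
        ps --no-headers -o comm= "$pid"   # record which process is being stopped
        kill "$pid"
        wait "$pid"                       # reap it so the next stage starts clean
    }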
00:15:33.616 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72438 00:15:33.616 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:33.616 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72438 00:15:33.616 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72438 ']' 00:15:33.616 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.616 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:33.616 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.616 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:33.616 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.616 [2024-11-15 10:38:59.107278] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:15:33.616 [2024-11-15 10:38:59.107804] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.873 [2024-11-15 10:38:59.257173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.874 [2024-11-15 10:38:59.313980] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.874 [2024-11-15 10:38:59.314278] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.874 [2024-11-15 10:38:59.314303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.874 [2024-11-15 10:38:59.314316] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.874 [2024-11-15 10:38:59.314325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
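Each nvmf_tgt instance here is started inside the nvmf_tgt_ns_spdk network namespace with -e 0xFFFF, which enables every tracepoint group; the NOTICE lines above describe how to pull a snapshot of those events. A hedged example, using the exact command the application suggests (the output redirection is an assumption about how you would keep the result):

    # snapshot the live trace of instance 0 (-i 0 selects the shared-memory region)
    spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # or, after the target exits, copy the raw buffer for offline analysis
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0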
00:15:33.874 [2024-11-15 10:38:59.314794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.131 [2024-11-15 10:38:59.372162] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:34.131 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:34.131 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:34.132 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:34.132 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:34.132 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:34.132 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.132 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.4e8bETFlTP 00:15:34.132 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.4e8bETFlTP 00:15:34.132 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:34.390 [2024-11-15 10:38:59.794223] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.390 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:34.648 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:34.906 [2024-11-15 10:39:00.374372] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:34.906 [2024-11-15 10:39:00.374843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:34.906 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:35.164 malloc0 00:15:35.164 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:35.730 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4e8bETFlTP 00:15:35.730 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:36.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
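The setup_nvmf_tgt steps traced above reduce to a short RPC sequence: create the TCP transport, create a subsystem backed by a malloc bdev, open a TLS-enabled listener (-k), register the temporary PSK file as a keyring entry, and allow the host with that key. The key file (/tmp/tmp.4e8bETFlTP) is expected to hold a PSK in the NVMe TLS interchange format (NVMeTLSkey-1:01:<base64>:). Condensed from the trace, with rpc.py assumed to be on PATH:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -k      # -k: listen with TLS (flagged experimental)
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.4e8bETFlTP
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0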
00:15:36.297 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72492 00:15:36.297 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:36.297 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:36.297 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72492 /var/tmp/bdevperf.sock 00:15:36.297 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72492 ']' 00:15:36.297 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:36.297 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:36.297 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:36.297 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:36.297 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:36.297 [2024-11-15 10:39:01.600885] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:15:36.297 [2024-11-15 10:39:01.601248] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72492 ] 00:15:36.297 [2024-11-15 10:39:01.750340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.556 [2024-11-15 10:39:01.813835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.556 [2024-11-15 10:39:01.870665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:36.556 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:36.556 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:36.556 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4e8bETFlTP 00:15:36.814 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:37.072 [2024-11-15 10:39:02.419262] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:37.072 nvme0n1 00:15:37.072 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:37.330 Running I/O for 1 seconds... 
00:15:38.265 3960.00 IOPS, 15.47 MiB/s 00:15:38.265 Latency(us) 00:15:38.265 [2024-11-15T10:39:03.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.265 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:38.265 Verification LBA range: start 0x0 length 0x2000 00:15:38.265 nvme0n1 : 1.03 3965.71 15.49 0.00 0.00 31901.46 7983.48 20375.74 00:15:38.265 [2024-11-15T10:39:03.763Z] =================================================================================================================== 00:15:38.265 [2024-11-15T10:39:03.763Z] Total : 3965.71 15.49 0.00 0.00 31901.46 7983.48 20375.74 00:15:38.265 { 00:15:38.265 "results": [ 00:15:38.265 { 00:15:38.265 "job": "nvme0n1", 00:15:38.265 "core_mask": "0x2", 00:15:38.265 "workload": "verify", 00:15:38.265 "status": "finished", 00:15:38.265 "verify_range": { 00:15:38.265 "start": 0, 00:15:38.265 "length": 8192 00:15:38.265 }, 00:15:38.265 "queue_depth": 128, 00:15:38.265 "io_size": 4096, 00:15:38.265 "runtime": 1.030837, 00:15:38.265 "iops": 3965.709418656878, 00:15:38.265 "mibps": 15.49105241662843, 00:15:38.265 "io_failed": 0, 00:15:38.265 "io_timeout": 0, 00:15:38.265 "avg_latency_us": 31901.463910336235, 00:15:38.265 "min_latency_us": 7983.476363636363, 00:15:38.265 "max_latency_us": 20375.738181818182 00:15:38.265 } 00:15:38.265 ], 00:15:38.265 "core_count": 1 00:15:38.265 } 00:15:38.265 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72492 00:15:38.265 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72492 ']' 00:15:38.265 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72492 00:15:38.265 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:38.265 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:38.265 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72492 00:15:38.265 killing process with pid 72492 00:15:38.265 Received shutdown signal, test time was about 1.000000 seconds 00:15:38.265 00:15:38.265 Latency(us) 00:15:38.265 [2024-11-15T10:39:03.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.265 [2024-11-15T10:39:03.763Z] =================================================================================================================== 00:15:38.265 [2024-11-15T10:39:03.763Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:38.265 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:38.265 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:38.265 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72492' 00:15:38.265 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72492 00:15:38.265 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72492 00:15:38.524 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72438 00:15:38.524 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72438 ']' 00:15:38.524 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72438 00:15:38.524 10:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:38.524 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:38.524 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72438 00:15:38.524 killing process with pid 72438 00:15:38.524 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:38.524 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:38.524 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72438' 00:15:38.524 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72438 00:15:38.524 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72438 00:15:38.783 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:38.783 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:38.783 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:38.783 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:38.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.783 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72535 00:15:38.783 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72535 00:15:38.783 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:38.783 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72535 ']' 00:15:38.783 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.783 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:38.783 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.783 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:38.783 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:38.783 [2024-11-15 10:39:04.230329] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:15:38.783 [2024-11-15 10:39:04.230424] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.041 [2024-11-15 10:39:04.374706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.041 [2024-11-15 10:39:04.436039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.041 [2024-11-15 10:39:04.436293] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:39.041 [2024-11-15 10:39:04.436314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.041 [2024-11-15 10:39:04.436322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.041 [2024-11-15 10:39:04.436330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.041 [2024-11-15 10:39:04.436766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.041 [2024-11-15 10:39:04.490710] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:39.974 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:39.974 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:39.974 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:39.974 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:39.974 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:39.974 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.974 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:15:39.974 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.974 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:39.974 [2024-11-15 10:39:05.287546] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:39.974 malloc0 00:15:39.974 [2024-11-15 10:39:05.318367] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:39.974 [2024-11-15 10:39:05.318611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:39.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:39.974 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.974 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72573 00:15:39.974 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:39.974 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72573 /var/tmp/bdevperf.sock 00:15:39.974 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72573 ']' 00:15:39.974 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:39.974 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:39.974 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
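The initiator side, traced next, mirrors the target setup: the same PSK file is registered with the bdevperf app's own keyring over its private RPC socket, a TLS-protected NVMe/TCP controller is attached, and the I/O run is driven through bdevperf.py. Condensed from the trace that follows:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4e8bETFlTP
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

After the run, the script snapshots both applications' live JSON configuration with save_config (the tgtcfg and bperfcfg dumps below) for reuse later in the test.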
00:15:39.974 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:39.974 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:39.974 [2024-11-15 10:39:05.397963] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:15:39.974 [2024-11-15 10:39:05.398231] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72573 ] 00:15:40.232 [2024-11-15 10:39:05.539197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.232 [2024-11-15 10:39:05.596548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.232 [2024-11-15 10:39:05.649344] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:40.232 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:40.232 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:40.232 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4e8bETFlTP 00:15:40.799 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:40.799 [2024-11-15 10:39:06.269916] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:41.056 nvme0n1 00:15:41.056 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:41.056 Running I/O for 1 seconds... 
00:15:42.431 3842.00 IOPS, 15.01 MiB/s 00:15:42.431 Latency(us) 00:15:42.431 [2024-11-15T10:39:07.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.431 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:42.431 Verification LBA range: start 0x0 length 0x2000 00:15:42.431 nvme0n1 : 1.02 3901.92 15.24 0.00 0.00 32435.96 325.82 19541.64 00:15:42.431 [2024-11-15T10:39:07.929Z] =================================================================================================================== 00:15:42.431 [2024-11-15T10:39:07.929Z] Total : 3901.92 15.24 0.00 0.00 32435.96 325.82 19541.64 00:15:42.431 { 00:15:42.431 "results": [ 00:15:42.431 { 00:15:42.431 "job": "nvme0n1", 00:15:42.431 "core_mask": "0x2", 00:15:42.431 "workload": "verify", 00:15:42.432 "status": "finished", 00:15:42.432 "verify_range": { 00:15:42.432 "start": 0, 00:15:42.432 "length": 8192 00:15:42.432 }, 00:15:42.432 "queue_depth": 128, 00:15:42.432 "io_size": 4096, 00:15:42.432 "runtime": 1.017705, 00:15:42.432 "iops": 3901.9165671781116, 00:15:42.432 "mibps": 15.241861590539498, 00:15:42.432 "io_failed": 0, 00:15:42.432 "io_timeout": 0, 00:15:42.432 "avg_latency_us": 32435.964114374674, 00:15:42.432 "min_latency_us": 325.8181818181818, 00:15:42.432 "max_latency_us": 19541.643636363635 00:15:42.432 } 00:15:42.432 ], 00:15:42.432 "core_count": 1 00:15:42.432 } 00:15:42.432 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:15:42.432 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.432 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:42.432 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.432 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:15:42.432 "subsystems": [ 00:15:42.432 { 00:15:42.432 "subsystem": "keyring", 00:15:42.432 "config": [ 00:15:42.432 { 00:15:42.432 "method": "keyring_file_add_key", 00:15:42.432 "params": { 00:15:42.432 "name": "key0", 00:15:42.432 "path": "/tmp/tmp.4e8bETFlTP" 00:15:42.432 } 00:15:42.432 } 00:15:42.432 ] 00:15:42.432 }, 00:15:42.432 { 00:15:42.432 "subsystem": "iobuf", 00:15:42.432 "config": [ 00:15:42.432 { 00:15:42.432 "method": "iobuf_set_options", 00:15:42.432 "params": { 00:15:42.432 "small_pool_count": 8192, 00:15:42.432 "large_pool_count": 1024, 00:15:42.432 "small_bufsize": 8192, 00:15:42.432 "large_bufsize": 135168, 00:15:42.432 "enable_numa": false 00:15:42.432 } 00:15:42.432 } 00:15:42.432 ] 00:15:42.432 }, 00:15:42.432 { 00:15:42.432 "subsystem": "sock", 00:15:42.432 "config": [ 00:15:42.432 { 00:15:42.432 "method": "sock_set_default_impl", 00:15:42.432 "params": { 00:15:42.432 "impl_name": "uring" 00:15:42.432 } 00:15:42.432 }, 00:15:42.432 { 00:15:42.432 "method": "sock_impl_set_options", 00:15:42.432 "params": { 00:15:42.432 "impl_name": "ssl", 00:15:42.432 "recv_buf_size": 4096, 00:15:42.432 "send_buf_size": 4096, 00:15:42.432 "enable_recv_pipe": true, 00:15:42.432 "enable_quickack": false, 00:15:42.432 "enable_placement_id": 0, 00:15:42.432 "enable_zerocopy_send_server": true, 00:15:42.432 "enable_zerocopy_send_client": false, 00:15:42.432 "zerocopy_threshold": 0, 00:15:42.432 "tls_version": 0, 00:15:42.432 "enable_ktls": false 00:15:42.432 } 00:15:42.432 }, 00:15:42.432 { 00:15:42.432 "method": "sock_impl_set_options", 00:15:42.432 "params": { 00:15:42.432 "impl_name": "posix", 
00:15:42.432 "recv_buf_size": 2097152, 00:15:42.432 "send_buf_size": 2097152, 00:15:42.432 "enable_recv_pipe": true, 00:15:42.432 "enable_quickack": false, 00:15:42.432 "enable_placement_id": 0, 00:15:42.432 "enable_zerocopy_send_server": true, 00:15:42.432 "enable_zerocopy_send_client": false, 00:15:42.432 "zerocopy_threshold": 0, 00:15:42.432 "tls_version": 0, 00:15:42.432 "enable_ktls": false 00:15:42.432 } 00:15:42.432 }, 00:15:42.432 { 00:15:42.432 "method": "sock_impl_set_options", 00:15:42.432 "params": { 00:15:42.432 "impl_name": "uring", 00:15:42.432 "recv_buf_size": 2097152, 00:15:42.432 "send_buf_size": 2097152, 00:15:42.432 "enable_recv_pipe": true, 00:15:42.432 "enable_quickack": false, 00:15:42.432 "enable_placement_id": 0, 00:15:42.432 "enable_zerocopy_send_server": false, 00:15:42.432 "enable_zerocopy_send_client": false, 00:15:42.432 "zerocopy_threshold": 0, 00:15:42.432 "tls_version": 0, 00:15:42.432 "enable_ktls": false 00:15:42.432 } 00:15:42.432 } 00:15:42.432 ] 00:15:42.432 }, 00:15:42.432 { 00:15:42.432 "subsystem": "vmd", 00:15:42.432 "config": [] 00:15:42.432 }, 00:15:42.432 { 00:15:42.432 "subsystem": "accel", 00:15:42.432 "config": [ 00:15:42.432 { 00:15:42.432 "method": "accel_set_options", 00:15:42.432 "params": { 00:15:42.432 "small_cache_size": 128, 00:15:42.432 "large_cache_size": 16, 00:15:42.432 "task_count": 2048, 00:15:42.432 "sequence_count": 2048, 00:15:42.432 "buf_count": 2048 00:15:42.432 } 00:15:42.432 } 00:15:42.432 ] 00:15:42.432 }, 00:15:42.432 { 00:15:42.432 "subsystem": "bdev", 00:15:42.432 "config": [ 00:15:42.432 { 00:15:42.432 "method": "bdev_set_options", 00:15:42.432 "params": { 00:15:42.432 "bdev_io_pool_size": 65535, 00:15:42.432 "bdev_io_cache_size": 256, 00:15:42.432 "bdev_auto_examine": true, 00:15:42.432 "iobuf_small_cache_size": 128, 00:15:42.432 "iobuf_large_cache_size": 16 00:15:42.432 } 00:15:42.432 }, 00:15:42.432 { 00:15:42.432 "method": "bdev_raid_set_options", 00:15:42.432 "params": { 00:15:42.432 "process_window_size_kb": 1024, 00:15:42.432 "process_max_bandwidth_mb_sec": 0 00:15:42.432 } 00:15:42.432 }, 00:15:42.432 { 00:15:42.432 "method": "bdev_iscsi_set_options", 00:15:42.432 "params": { 00:15:42.432 "timeout_sec": 30 00:15:42.432 } 00:15:42.432 }, 00:15:42.432 { 00:15:42.432 "method": "bdev_nvme_set_options", 00:15:42.432 "params": { 00:15:42.432 "action_on_timeout": "none", 00:15:42.432 "timeout_us": 0, 00:15:42.432 "timeout_admin_us": 0, 00:15:42.432 "keep_alive_timeout_ms": 10000, 00:15:42.432 "arbitration_burst": 0, 00:15:42.432 "low_priority_weight": 0, 00:15:42.432 "medium_priority_weight": 0, 00:15:42.432 "high_priority_weight": 0, 00:15:42.432 "nvme_adminq_poll_period_us": 10000, 00:15:42.432 "nvme_ioq_poll_period_us": 0, 00:15:42.432 "io_queue_requests": 0, 00:15:42.432 "delay_cmd_submit": true, 00:15:42.432 "transport_retry_count": 4, 00:15:42.432 "bdev_retry_count": 3, 00:15:42.432 "transport_ack_timeout": 0, 00:15:42.432 "ctrlr_loss_timeout_sec": 0, 00:15:42.432 "reconnect_delay_sec": 0, 00:15:42.432 "fast_io_fail_timeout_sec": 0, 00:15:42.432 "disable_auto_failback": false, 00:15:42.432 "generate_uuids": false, 00:15:42.432 "transport_tos": 0, 00:15:42.432 "nvme_error_stat": false, 00:15:42.432 "rdma_srq_size": 0, 00:15:42.432 "io_path_stat": false, 00:15:42.432 "allow_accel_sequence": false, 00:15:42.432 "rdma_max_cq_size": 0, 00:15:42.432 "rdma_cm_event_timeout_ms": 0, 00:15:42.432 "dhchap_digests": [ 00:15:42.432 "sha256", 00:15:42.432 "sha384", 00:15:42.432 "sha512" 00:15:42.432 ], 00:15:42.432 
"dhchap_dhgroups": [ 00:15:42.432 "null", 00:15:42.432 "ffdhe2048", 00:15:42.432 "ffdhe3072", 00:15:42.432 "ffdhe4096", 00:15:42.432 "ffdhe6144", 00:15:42.432 "ffdhe8192" 00:15:42.432 ] 00:15:42.432 } 00:15:42.432 }, 00:15:42.432 { 00:15:42.432 "method": "bdev_nvme_set_hotplug", 00:15:42.432 "params": { 00:15:42.432 "period_us": 100000, 00:15:42.432 "enable": false 00:15:42.432 } 00:15:42.432 }, 00:15:42.432 { 00:15:42.432 "method": "bdev_malloc_create", 00:15:42.432 "params": { 00:15:42.432 "name": "malloc0", 00:15:42.432 "num_blocks": 8192, 00:15:42.432 "block_size": 4096, 00:15:42.432 "physical_block_size": 4096, 00:15:42.432 "uuid": "fa9c4fe2-52ef-484b-85b6-918b177615b4", 00:15:42.432 "optimal_io_boundary": 0, 00:15:42.432 "md_size": 0, 00:15:42.432 "dif_type": 0, 00:15:42.432 "dif_is_head_of_md": false, 00:15:42.432 "dif_pi_format": 0 00:15:42.432 } 00:15:42.432 }, 00:15:42.432 { 00:15:42.432 "method": "bdev_wait_for_examine" 00:15:42.432 } 00:15:42.432 ] 00:15:42.432 }, 00:15:42.432 { 00:15:42.432 "subsystem": "nbd", 00:15:42.432 "config": [] 00:15:42.432 }, 00:15:42.432 { 00:15:42.432 "subsystem": "scheduler", 00:15:42.432 "config": [ 00:15:42.432 { 00:15:42.432 "method": "framework_set_scheduler", 00:15:42.432 "params": { 00:15:42.432 "name": "static" 00:15:42.432 } 00:15:42.432 } 00:15:42.432 ] 00:15:42.432 }, 00:15:42.432 { 00:15:42.432 "subsystem": "nvmf", 00:15:42.432 "config": [ 00:15:42.432 { 00:15:42.432 "method": "nvmf_set_config", 00:15:42.432 "params": { 00:15:42.432 "discovery_filter": "match_any", 00:15:42.432 "admin_cmd_passthru": { 00:15:42.432 "identify_ctrlr": false 00:15:42.432 }, 00:15:42.432 "dhchap_digests": [ 00:15:42.432 "sha256", 00:15:42.432 "sha384", 00:15:42.432 "sha512" 00:15:42.432 ], 00:15:42.432 "dhchap_dhgroups": [ 00:15:42.432 "null", 00:15:42.432 "ffdhe2048", 00:15:42.432 "ffdhe3072", 00:15:42.432 "ffdhe4096", 00:15:42.432 "ffdhe6144", 00:15:42.432 "ffdhe8192" 00:15:42.432 ] 00:15:42.432 } 00:15:42.432 }, 00:15:42.432 { 00:15:42.432 "method": "nvmf_set_max_subsystems", 00:15:42.432 "params": { 00:15:42.432 "max_subsystems": 1024 00:15:42.432 } 00:15:42.432 }, 00:15:42.432 { 00:15:42.432 "method": "nvmf_set_crdt", 00:15:42.432 "params": { 00:15:42.432 "crdt1": 0, 00:15:42.432 "crdt2": 0, 00:15:42.432 "crdt3": 0 00:15:42.432 } 00:15:42.432 }, 00:15:42.432 { 00:15:42.432 "method": "nvmf_create_transport", 00:15:42.432 "params": { 00:15:42.432 "trtype": "TCP", 00:15:42.432 "max_queue_depth": 128, 00:15:42.432 "max_io_qpairs_per_ctrlr": 127, 00:15:42.432 "in_capsule_data_size": 4096, 00:15:42.433 "max_io_size": 131072, 00:15:42.433 "io_unit_size": 131072, 00:15:42.433 "max_aq_depth": 128, 00:15:42.433 "num_shared_buffers": 511, 00:15:42.433 "buf_cache_size": 4294967295, 00:15:42.433 "dif_insert_or_strip": false, 00:15:42.433 "zcopy": false, 00:15:42.433 "c2h_success": false, 00:15:42.433 "sock_priority": 0, 00:15:42.433 "abort_timeout_sec": 1, 00:15:42.433 "ack_timeout": 0, 00:15:42.433 "data_wr_pool_size": 0 00:15:42.433 } 00:15:42.433 }, 00:15:42.433 { 00:15:42.433 "method": "nvmf_create_subsystem", 00:15:42.433 "params": { 00:15:42.433 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.433 "allow_any_host": false, 00:15:42.433 "serial_number": "00000000000000000000", 00:15:42.433 "model_number": "SPDK bdev Controller", 00:15:42.433 "max_namespaces": 32, 00:15:42.433 "min_cntlid": 1, 00:15:42.433 "max_cntlid": 65519, 00:15:42.433 "ana_reporting": false 00:15:42.433 } 00:15:42.433 }, 00:15:42.433 { 00:15:42.433 "method": "nvmf_subsystem_add_host", 
00:15:42.433 "params": { 00:15:42.433 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.433 "host": "nqn.2016-06.io.spdk:host1", 00:15:42.433 "psk": "key0" 00:15:42.433 } 00:15:42.433 }, 00:15:42.433 { 00:15:42.433 "method": "nvmf_subsystem_add_ns", 00:15:42.433 "params": { 00:15:42.433 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.433 "namespace": { 00:15:42.433 "nsid": 1, 00:15:42.433 "bdev_name": "malloc0", 00:15:42.433 "nguid": "FA9C4FE252EF484B85B6918B177615B4", 00:15:42.433 "uuid": "fa9c4fe2-52ef-484b-85b6-918b177615b4", 00:15:42.433 "no_auto_visible": false 00:15:42.433 } 00:15:42.433 } 00:15:42.433 }, 00:15:42.433 { 00:15:42.433 "method": "nvmf_subsystem_add_listener", 00:15:42.433 "params": { 00:15:42.433 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.433 "listen_address": { 00:15:42.433 "trtype": "TCP", 00:15:42.433 "adrfam": "IPv4", 00:15:42.433 "traddr": "10.0.0.3", 00:15:42.433 "trsvcid": "4420" 00:15:42.433 }, 00:15:42.433 "secure_channel": false, 00:15:42.433 "sock_impl": "ssl" 00:15:42.433 } 00:15:42.433 } 00:15:42.433 ] 00:15:42.433 } 00:15:42.433 ] 00:15:42.433 }' 00:15:42.433 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:42.691 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:15:42.691 "subsystems": [ 00:15:42.691 { 00:15:42.691 "subsystem": "keyring", 00:15:42.691 "config": [ 00:15:42.691 { 00:15:42.691 "method": "keyring_file_add_key", 00:15:42.691 "params": { 00:15:42.691 "name": "key0", 00:15:42.691 "path": "/tmp/tmp.4e8bETFlTP" 00:15:42.691 } 00:15:42.691 } 00:15:42.691 ] 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "subsystem": "iobuf", 00:15:42.691 "config": [ 00:15:42.691 { 00:15:42.691 "method": "iobuf_set_options", 00:15:42.691 "params": { 00:15:42.691 "small_pool_count": 8192, 00:15:42.691 "large_pool_count": 1024, 00:15:42.691 "small_bufsize": 8192, 00:15:42.691 "large_bufsize": 135168, 00:15:42.691 "enable_numa": false 00:15:42.691 } 00:15:42.691 } 00:15:42.691 ] 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "subsystem": "sock", 00:15:42.691 "config": [ 00:15:42.691 { 00:15:42.691 "method": "sock_set_default_impl", 00:15:42.691 "params": { 00:15:42.691 "impl_name": "uring" 00:15:42.691 } 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "method": "sock_impl_set_options", 00:15:42.691 "params": { 00:15:42.691 "impl_name": "ssl", 00:15:42.691 "recv_buf_size": 4096, 00:15:42.691 "send_buf_size": 4096, 00:15:42.691 "enable_recv_pipe": true, 00:15:42.691 "enable_quickack": false, 00:15:42.691 "enable_placement_id": 0, 00:15:42.691 "enable_zerocopy_send_server": true, 00:15:42.691 "enable_zerocopy_send_client": false, 00:15:42.691 "zerocopy_threshold": 0, 00:15:42.691 "tls_version": 0, 00:15:42.691 "enable_ktls": false 00:15:42.691 } 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "method": "sock_impl_set_options", 00:15:42.691 "params": { 00:15:42.691 "impl_name": "posix", 00:15:42.691 "recv_buf_size": 2097152, 00:15:42.691 "send_buf_size": 2097152, 00:15:42.691 "enable_recv_pipe": true, 00:15:42.691 "enable_quickack": false, 00:15:42.691 "enable_placement_id": 0, 00:15:42.691 "enable_zerocopy_send_server": true, 00:15:42.691 "enable_zerocopy_send_client": false, 00:15:42.691 "zerocopy_threshold": 0, 00:15:42.691 "tls_version": 0, 00:15:42.691 "enable_ktls": false 00:15:42.691 } 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "method": "sock_impl_set_options", 00:15:42.691 "params": { 00:15:42.691 "impl_name": "uring", 00:15:42.691 
"recv_buf_size": 2097152, 00:15:42.691 "send_buf_size": 2097152, 00:15:42.691 "enable_recv_pipe": true, 00:15:42.691 "enable_quickack": false, 00:15:42.691 "enable_placement_id": 0, 00:15:42.691 "enable_zerocopy_send_server": false, 00:15:42.691 "enable_zerocopy_send_client": false, 00:15:42.691 "zerocopy_threshold": 0, 00:15:42.691 "tls_version": 0, 00:15:42.691 "enable_ktls": false 00:15:42.691 } 00:15:42.691 } 00:15:42.691 ] 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "subsystem": "vmd", 00:15:42.691 "config": [] 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "subsystem": "accel", 00:15:42.691 "config": [ 00:15:42.691 { 00:15:42.691 "method": "accel_set_options", 00:15:42.691 "params": { 00:15:42.691 "small_cache_size": 128, 00:15:42.691 "large_cache_size": 16, 00:15:42.691 "task_count": 2048, 00:15:42.691 "sequence_count": 2048, 00:15:42.691 "buf_count": 2048 00:15:42.691 } 00:15:42.691 } 00:15:42.691 ] 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "subsystem": "bdev", 00:15:42.691 "config": [ 00:15:42.691 { 00:15:42.691 "method": "bdev_set_options", 00:15:42.691 "params": { 00:15:42.691 "bdev_io_pool_size": 65535, 00:15:42.691 "bdev_io_cache_size": 256, 00:15:42.691 "bdev_auto_examine": true, 00:15:42.691 "iobuf_small_cache_size": 128, 00:15:42.691 "iobuf_large_cache_size": 16 00:15:42.691 } 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "method": "bdev_raid_set_options", 00:15:42.691 "params": { 00:15:42.691 "process_window_size_kb": 1024, 00:15:42.691 "process_max_bandwidth_mb_sec": 0 00:15:42.691 } 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "method": "bdev_iscsi_set_options", 00:15:42.691 "params": { 00:15:42.691 "timeout_sec": 30 00:15:42.691 } 00:15:42.691 }, 00:15:42.691 { 00:15:42.691 "method": "bdev_nvme_set_options", 00:15:42.691 "params": { 00:15:42.691 "action_on_timeout": "none", 00:15:42.691 "timeout_us": 0, 00:15:42.691 "timeout_admin_us": 0, 00:15:42.691 "keep_alive_timeout_ms": 10000, 00:15:42.691 "arbitration_burst": 0, 00:15:42.691 "low_priority_weight": 0, 00:15:42.691 "medium_priority_weight": 0, 00:15:42.691 "high_priority_weight": 0, 00:15:42.691 "nvme_adminq_poll_period_us": 10000, 00:15:42.691 "nvme_ioq_poll_period_us": 0, 00:15:42.691 "io_queue_requests": 512, 00:15:42.691 "delay_cmd_submit": true, 00:15:42.691 "transport_retry_count": 4, 00:15:42.692 "bdev_retry_count": 3, 00:15:42.692 "transport_ack_timeout": 0, 00:15:42.692 "ctrlr_loss_timeout_sec": 0, 00:15:42.692 "reconnect_delay_sec": 0, 00:15:42.692 "fast_io_fail_timeout_sec": 0, 00:15:42.692 "disable_auto_failback": false, 00:15:42.692 "generate_uuids": false, 00:15:42.692 "transport_tos": 0, 00:15:42.692 "nvme_error_stat": false, 00:15:42.692 "rdma_srq_size": 0, 00:15:42.692 "io_path_stat": false, 00:15:42.692 "allow_accel_sequence": false, 00:15:42.692 "rdma_max_cq_size": 0, 00:15:42.692 "rdma_cm_event_timeout_ms": 0, 00:15:42.692 "dhchap_digests": [ 00:15:42.692 "sha256", 00:15:42.692 "sha384", 00:15:42.692 "sha512" 00:15:42.692 ], 00:15:42.692 "dhchap_dhgroups": [ 00:15:42.692 "null", 00:15:42.692 "ffdhe2048", 00:15:42.692 "ffdhe3072", 00:15:42.692 "ffdhe4096", 00:15:42.692 "ffdhe6144", 00:15:42.692 "ffdhe8192" 00:15:42.692 ] 00:15:42.692 } 00:15:42.692 }, 00:15:42.692 { 00:15:42.692 "method": "bdev_nvme_attach_controller", 00:15:42.692 "params": { 00:15:42.692 "name": "nvme0", 00:15:42.692 "trtype": "TCP", 00:15:42.692 "adrfam": "IPv4", 00:15:42.692 "traddr": "10.0.0.3", 00:15:42.692 "trsvcid": "4420", 00:15:42.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.692 "prchk_reftag": false, 00:15:42.692 
"prchk_guard": false, 00:15:42.692 "ctrlr_loss_timeout_sec": 0, 00:15:42.692 "reconnect_delay_sec": 0, 00:15:42.692 "fast_io_fail_timeout_sec": 0, 00:15:42.692 "psk": "key0", 00:15:42.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:42.692 "hdgst": false, 00:15:42.692 "ddgst": false, 00:15:42.692 "multipath": "multipath" 00:15:42.692 } 00:15:42.692 }, 00:15:42.692 { 00:15:42.692 "method": "bdev_nvme_set_hotplug", 00:15:42.692 "params": { 00:15:42.692 "period_us": 100000, 00:15:42.692 "enable": false 00:15:42.692 } 00:15:42.692 }, 00:15:42.692 { 00:15:42.692 "method": "bdev_enable_histogram", 00:15:42.692 "params": { 00:15:42.692 "name": "nvme0n1", 00:15:42.692 "enable": true 00:15:42.692 } 00:15:42.692 }, 00:15:42.692 { 00:15:42.692 "method": "bdev_wait_for_examine" 00:15:42.692 } 00:15:42.692 ] 00:15:42.692 }, 00:15:42.692 { 00:15:42.692 "subsystem": "nbd", 00:15:42.692 "config": [] 00:15:42.692 } 00:15:42.692 ] 00:15:42.692 }' 00:15:42.692 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72573 00:15:42.692 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72573 ']' 00:15:42.692 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72573 00:15:42.692 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:42.692 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:42.692 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72573 00:15:42.692 killing process with pid 72573 00:15:42.692 Received shutdown signal, test time was about 1.000000 seconds 00:15:42.692 00:15:42.692 Latency(us) 00:15:42.692 [2024-11-15T10:39:08.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.692 [2024-11-15T10:39:08.190Z] =================================================================================================================== 00:15:42.692 [2024-11-15T10:39:08.190Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:42.692 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:42.692 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:42.692 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72573' 00:15:42.692 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72573 00:15:42.692 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72573 00:15:42.950 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72535 00:15:42.950 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72535 ']' 00:15:42.950 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72535 00:15:42.950 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:42.950 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:42.950 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72535 00:15:42.950 killing process with pid 72535 00:15:42.950 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:15:42.950 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:42.950 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72535' 00:15:42.950 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72535 00:15:42.950 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72535 00:15:43.209 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:15:43.209 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:43.209 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:15:43.209 "subsystems": [ 00:15:43.209 { 00:15:43.209 "subsystem": "keyring", 00:15:43.209 "config": [ 00:15:43.209 { 00:15:43.209 "method": "keyring_file_add_key", 00:15:43.209 "params": { 00:15:43.209 "name": "key0", 00:15:43.209 "path": "/tmp/tmp.4e8bETFlTP" 00:15:43.209 } 00:15:43.209 } 00:15:43.209 ] 00:15:43.209 }, 00:15:43.209 { 00:15:43.209 "subsystem": "iobuf", 00:15:43.209 "config": [ 00:15:43.209 { 00:15:43.209 "method": "iobuf_set_options", 00:15:43.209 "params": { 00:15:43.209 "small_pool_count": 8192, 00:15:43.209 "large_pool_count": 1024, 00:15:43.209 "small_bufsize": 8192, 00:15:43.209 "large_bufsize": 135168, 00:15:43.209 "enable_numa": false 00:15:43.209 } 00:15:43.209 } 00:15:43.209 ] 00:15:43.209 }, 00:15:43.209 { 00:15:43.209 "subsystem": "sock", 00:15:43.209 "config": [ 00:15:43.209 { 00:15:43.209 "method": "sock_set_default_impl", 00:15:43.209 "params": { 00:15:43.209 "impl_name": "uring" 00:15:43.209 } 00:15:43.209 }, 00:15:43.209 { 00:15:43.209 "method": "sock_impl_set_options", 00:15:43.209 "params": { 00:15:43.209 "impl_name": "ssl", 00:15:43.209 "recv_buf_size": 4096, 00:15:43.209 "send_buf_size": 4096, 00:15:43.209 "enable_recv_pipe": true, 00:15:43.209 "enable_quickack": false, 00:15:43.209 "enable_placement_id": 0, 00:15:43.209 "enable_zerocopy_send_server": true, 00:15:43.209 "enable_zerocopy_send_client": false, 00:15:43.209 "zerocopy_threshold": 0, 00:15:43.209 "tls_version": 0, 00:15:43.209 "enable_ktls": false 00:15:43.209 } 00:15:43.209 }, 00:15:43.209 { 00:15:43.209 "method": "sock_impl_set_options", 00:15:43.209 "params": { 00:15:43.209 "impl_name": "posix", 00:15:43.209 "recv_buf_size": 2097152, 00:15:43.209 "send_buf_size": 2097152, 00:15:43.209 "enable_recv_pipe": true, 00:15:43.209 "enable_quickack": false, 00:15:43.209 "enable_placement_id": 0, 00:15:43.209 "enable_zerocopy_send_server": true, 00:15:43.209 "enable_zerocopy_send_client": false, 00:15:43.209 "zerocopy_threshold": 0, 00:15:43.209 "tls_version": 0, 00:15:43.209 "enable_ktls": false 00:15:43.209 } 00:15:43.209 }, 00:15:43.209 { 00:15:43.209 "method": "sock_impl_set_options", 00:15:43.209 "params": { 00:15:43.209 "impl_name": "uring", 00:15:43.209 "recv_buf_size": 2097152, 00:15:43.209 "send_buf_size": 2097152, 00:15:43.209 "enable_recv_pipe": true, 00:15:43.209 "enable_quickack": false, 00:15:43.209 "enable_placement_id": 0, 00:15:43.209 "enable_zerocopy_send_server": false, 00:15:43.209 "enable_zerocopy_send_client": false, 00:15:43.209 "zerocopy_threshold": 0, 00:15:43.209 "tls_version": 0, 00:15:43.209 "enable_ktls": false 00:15:43.209 } 00:15:43.209 } 00:15:43.209 ] 00:15:43.209 }, 00:15:43.209 { 00:15:43.209 "subsystem": "vmd", 00:15:43.209 "config": [] 00:15:43.209 }, 00:15:43.209 { 00:15:43.209 
"subsystem": "accel", 00:15:43.209 "config": [ 00:15:43.209 { 00:15:43.209 "method": "accel_set_options", 00:15:43.209 "params": { 00:15:43.209 "small_cache_size": 128, 00:15:43.209 "large_cache_size": 16, 00:15:43.209 "task_count": 2048, 00:15:43.209 "sequence_count": 2048, 00:15:43.209 "buf_count": 2048 00:15:43.209 } 00:15:43.209 } 00:15:43.209 ] 00:15:43.209 }, 00:15:43.209 { 00:15:43.209 "subsystem": "bdev", 00:15:43.209 "config": [ 00:15:43.209 { 00:15:43.209 "method": "bdev_set_options", 00:15:43.209 "params": { 00:15:43.209 "bdev_io_pool_size": 65535, 00:15:43.209 "bdev_io_cache_size": 256, 00:15:43.209 "bdev_auto_examine": true, 00:15:43.209 "iobuf_small_cache_size": 128, 00:15:43.209 "iobuf_large_cache_size": 16 00:15:43.209 } 00:15:43.209 }, 00:15:43.209 { 00:15:43.209 "method": "bdev_raid_set_options", 00:15:43.209 "params": { 00:15:43.209 "process_window_size_kb": 1024, 00:15:43.209 "process_max_bandwidth_mb_sec": 0 00:15:43.209 } 00:15:43.209 }, 00:15:43.209 { 00:15:43.209 "method": "bdev_iscsi_set_options", 00:15:43.209 "params": { 00:15:43.209 "timeout_sec": 30 00:15:43.209 } 00:15:43.209 }, 00:15:43.209 { 00:15:43.209 "method": "bdev_nvme_set_options", 00:15:43.209 "params": { 00:15:43.209 "action_on_timeout": "none", 00:15:43.209 "timeout_us": 0, 00:15:43.209 "timeout_admin_us": 0, 00:15:43.209 "keep_alive_timeout_ms": 10000, 00:15:43.209 "arbitration_burst": 0, 00:15:43.209 "low_priority_weight": 0, 00:15:43.209 "medium_priority_weight": 0, 00:15:43.209 "high_priority_weight": 0, 00:15:43.209 "nvme_adminq_poll_period_us": 10000, 00:15:43.209 "nvme_ioq_poll_period_us": 0, 00:15:43.209 "io_queue_requests": 0, 00:15:43.209 "delay_cmd_submit": true, 00:15:43.209 "transport_retry_count": 4, 00:15:43.209 "bdev_retry_count": 3, 00:15:43.209 "transport_ack_timeout": 0, 00:15:43.209 "ctrlr_loss_timeout_sec": 0, 00:15:43.209 "reconnect_delay_sec": 0, 00:15:43.209 "fast_io_fail_timeout_sec": 0, 00:15:43.209 "disable_auto_failback": false, 00:15:43.209 "generate_uuids": false, 00:15:43.209 "transport_tos": 0, 00:15:43.209 "nvme_error_stat": false, 00:15:43.209 "rdma_srq_size": 0, 00:15:43.209 "io_path_stat": false, 00:15:43.209 "allow_accel_sequence": false, 00:15:43.209 "rdma_max_cq_size": 0, 00:15:43.209 "rdma_cm_event_timeout_ms": 0, 00:15:43.209 "dhchap_digests": [ 00:15:43.209 "sha256", 00:15:43.209 "sha384", 00:15:43.209 "sha512" 00:15:43.209 ], 00:15:43.209 "dhchap_dhgroups": [ 00:15:43.209 "null", 00:15:43.209 "ffdhe2048", 00:15:43.209 "ffdhe3072", 00:15:43.209 "ffdhe4096", 00:15:43.209 "ffdhe6144", 00:15:43.209 "ffdhe8192" 00:15:43.209 ] 00:15:43.209 } 00:15:43.209 }, 00:15:43.209 { 00:15:43.209 "method": "bdev_nvme_set_hotplug", 00:15:43.209 "params": { 00:15:43.209 "period_us": 100000, 00:15:43.209 "enable": false 00:15:43.209 } 00:15:43.209 }, 00:15:43.209 { 00:15:43.209 "method": "bdev_malloc_create", 00:15:43.209 "params": { 00:15:43.209 "name": "malloc0", 00:15:43.209 "num_blocks": 8192, 00:15:43.209 "block_size": 4096, 00:15:43.209 "physical_block_size": 4096, 00:15:43.209 "uuid": "fa9c4fe2-52ef-484b-85b6-918b177615b4", 00:15:43.209 "optimal_io_boundary": 0, 00:15:43.210 "md_size": 0, 00:15:43.210 "dif_type": 0, 00:15:43.210 "dif_is_head_of_md": false, 00:15:43.210 "dif_pi_format": 0 00:15:43.210 } 00:15:43.210 }, 00:15:43.210 { 00:15:43.210 "method": "bdev_wait_for_examine" 00:15:43.210 } 00:15:43.210 ] 00:15:43.210 }, 00:15:43.210 { 00:15:43.210 "subsystem": "nbd", 00:15:43.210 "config": [] 00:15:43.210 }, 00:15:43.210 { 00:15:43.210 "subsystem": "scheduler", 
00:15:43.210 "config": [ 00:15:43.210 { 00:15:43.210 "method": "framework_set_scheduler", 00:15:43.210 "params": { 00:15:43.210 "name": "static" 00:15:43.210 } 00:15:43.210 } 00:15:43.210 ] 00:15:43.210 }, 00:15:43.210 { 00:15:43.210 "subsystem": "nvmf", 00:15:43.210 "config": [ 00:15:43.210 { 00:15:43.210 "method": "nvmf_set_config", 00:15:43.210 "params": { 00:15:43.210 "discovery_filter": "match_any", 00:15:43.210 "admin_cmd_passthru": { 00:15:43.210 "identify_ctrlr": false 00:15:43.210 }, 00:15:43.210 "dhchap_digests": [ 00:15:43.210 "sha256", 00:15:43.210 "sha384", 00:15:43.210 "sha512" 00:15:43.210 ], 00:15:43.210 "dhchap_dhgroups": [ 00:15:43.210 "null", 00:15:43.210 "ffdhe2048", 00:15:43.210 "ffdhe3072", 00:15:43.210 "ffdhe4096", 00:15:43.210 "ffdhe6144", 00:15:43.210 "ffdhe8192" 00:15:43.210 ] 00:15:43.210 } 00:15:43.210 }, 00:15:43.210 { 00:15:43.210 "method": "nvmf_set_max_subsyste 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:43.210 ms", 00:15:43.210 "params": { 00:15:43.210 "max_subsystems": 1024 00:15:43.210 } 00:15:43.210 }, 00:15:43.210 { 00:15:43.210 "method": "nvmf_set_crdt", 00:15:43.210 "params": { 00:15:43.210 "crdt1": 0, 00:15:43.210 "crdt2": 0, 00:15:43.210 "crdt3": 0 00:15:43.210 } 00:15:43.210 }, 00:15:43.210 { 00:15:43.210 "method": "nvmf_create_transport", 00:15:43.210 "params": { 00:15:43.210 "trtype": "TCP", 00:15:43.210 "max_queue_depth": 128, 00:15:43.210 "max_io_qpairs_per_ctrlr": 127, 00:15:43.210 "in_capsule_data_size": 4096, 00:15:43.210 "max_io_size": 131072, 00:15:43.210 "io_unit_size": 131072, 00:15:43.210 "max_aq_depth": 128, 00:15:43.210 "num_shared_buffers": 511, 00:15:43.210 "buf_cache_size": 4294967295, 00:15:43.210 "dif_insert_or_strip": false, 00:15:43.210 "zcopy": false, 00:15:43.210 "c2h_success": false, 00:15:43.210 "sock_priority": 0, 00:15:43.210 "abort_timeout_sec": 1, 00:15:43.210 "ack_timeout": 0, 00:15:43.210 "data_wr_pool_size": 0 00:15:43.210 } 00:15:43.210 }, 00:15:43.210 { 00:15:43.210 "method": "nvmf_create_subsystem", 00:15:43.210 "params": { 00:15:43.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:43.210 "allow_any_host": false, 00:15:43.210 "serial_number": "00000000000000000000", 00:15:43.210 "model_number": "SPDK bdev Controller", 00:15:43.210 "max_namespaces": 32, 00:15:43.210 "min_cntlid": 1, 00:15:43.210 "max_cntlid": 65519, 00:15:43.210 "ana_reporting": false 00:15:43.210 } 00:15:43.210 }, 00:15:43.210 { 00:15:43.210 "method": "nvmf_subsystem_add_host", 00:15:43.210 "params": { 00:15:43.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:43.210 "host": "nqn.2016-06.io.spdk:host1", 00:15:43.210 "psk": "key0" 00:15:43.210 } 00:15:43.210 }, 00:15:43.210 { 00:15:43.210 "method": "nvmf_subsystem_add_ns", 00:15:43.210 "params": { 00:15:43.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:43.210 "namespace": { 00:15:43.210 "nsid": 1, 00:15:43.210 "bdev_name": "malloc0", 00:15:43.210 "nguid": "FA9C4FE252EF484B85B6918B177615B4", 00:15:43.210 "uuid": "fa9c4fe2-52ef-484b-85b6-918b177615b4", 00:15:43.210 "no_auto_visible": false 00:15:43.210 } 00:15:43.210 } 00:15:43.210 }, 00:15:43.210 { 00:15:43.210 "method": "nvmf_subsystem_add_listener", 00:15:43.210 "params": { 00:15:43.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:43.210 "listen_address": { 00:15:43.210 "trtype": "TCP", 00:15:43.210 "adrfam": "IPv4", 00:15:43.210 "traddr": "10.0.0.3", 00:15:43.210 "trsvcid": "4420" 00:15:43.210 }, 00:15:43.210 "secure_channel": false, 00:15:43.210 "sock_impl": "ssl" 00:15:43.210 } 00:15:43.210 } 
00:15:43.210 ] 00:15:43.210 } 00:15:43.210 ] 00:15:43.210 }' 00:15:43.210 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:43.210 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72622 00:15:43.210 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:43.210 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72622 00:15:43.210 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72622 ']' 00:15:43.210 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.210 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:43.210 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.210 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:43.210 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:43.210 [2024-11-15 10:39:08.663663] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:15:43.210 [2024-11-15 10:39:08.663766] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.469 [2024-11-15 10:39:08.812870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.469 [2024-11-15 10:39:08.870216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.469 [2024-11-15 10:39:08.870278] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.469 [2024-11-15 10:39:08.870291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.469 [2024-11-15 10:39:08.870300] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.469 [2024-11-15 10:39:08.870307] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:43.469 [2024-11-15 10:39:08.870799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.728 [2024-11-15 10:39:09.038717] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:43.728 [2024-11-15 10:39:09.121550] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.728 [2024-11-15 10:39:09.153475] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:43.728 [2024-11-15 10:39:09.153710] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:44.294 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:44.294 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:44.294 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:44.294 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:44.294 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:44.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:44.554 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.554 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72657 00:15:44.554 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72657 /var/tmp/bdevperf.sock 00:15:44.554 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72657 ']' 00:15:44.554 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:44.554 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:44.554 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:44.554 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:44.554 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:44.554 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:44.554 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:15:44.554 "subsystems": [ 00:15:44.554 { 00:15:44.555 "subsystem": "keyring", 00:15:44.555 "config": [ 00:15:44.555 { 00:15:44.555 "method": "keyring_file_add_key", 00:15:44.555 "params": { 00:15:44.555 "name": "key0", 00:15:44.555 "path": "/tmp/tmp.4e8bETFlTP" 00:15:44.555 } 00:15:44.555 } 00:15:44.555 ] 00:15:44.555 }, 00:15:44.555 { 00:15:44.555 "subsystem": "iobuf", 00:15:44.555 "config": [ 00:15:44.555 { 00:15:44.555 "method": "iobuf_set_options", 00:15:44.555 "params": { 00:15:44.555 "small_pool_count": 8192, 00:15:44.555 "large_pool_count": 1024, 00:15:44.555 "small_bufsize": 8192, 00:15:44.555 "large_bufsize": 135168, 00:15:44.555 "enable_numa": false 00:15:44.555 } 00:15:44.555 } 00:15:44.555 ] 00:15:44.555 }, 00:15:44.555 { 00:15:44.555 "subsystem": "sock", 00:15:44.555 "config": [ 00:15:44.555 { 00:15:44.555 "method": "sock_set_default_impl", 00:15:44.555 "params": { 00:15:44.555 "impl_name": "uring" 00:15:44.555 } 00:15:44.555 }, 00:15:44.555 { 00:15:44.555 "method": "sock_impl_set_options", 00:15:44.555 "params": { 00:15:44.555 "impl_name": "ssl", 00:15:44.555 "recv_buf_size": 4096, 00:15:44.555 "send_buf_size": 4096, 00:15:44.555 "enable_recv_pipe": true, 00:15:44.555 "enable_quickack": false, 00:15:44.555 "enable_placement_id": 0, 00:15:44.555 "enable_zerocopy_send_server": true, 00:15:44.555 "enable_zerocopy_send_client": false, 00:15:44.555 "zerocopy_threshold": 0, 00:15:44.555 "tls_version": 0, 00:15:44.555 "enable_ktls": false 00:15:44.555 } 00:15:44.555 }, 00:15:44.555 { 00:15:44.555 "method": "sock_impl_set_options", 00:15:44.555 "params": { 00:15:44.555 "impl_name": "posix", 00:15:44.556 "recv_buf_size": 2097152, 00:15:44.556 "send_buf_size": 2097152, 00:15:44.556 "enable_recv_pipe": true, 00:15:44.556 "enable_quickack": false, 00:15:44.556 "enable_placement_id": 0, 00:15:44.556 "enable_zerocopy_send_server": true, 00:15:44.556 "enable_zerocopy_send_client": false, 00:15:44.556 "zerocopy_threshold": 0, 00:15:44.556 "tls_version": 0, 00:15:44.556 "enable_ktls": false 00:15:44.556 } 00:15:44.556 }, 00:15:44.556 { 00:15:44.556 "method": "sock_impl_set_options", 00:15:44.556 "params": { 00:15:44.556 "impl_name": "uring", 00:15:44.556 "recv_buf_size": 2097152, 00:15:44.556 "send_buf_size": 2097152, 00:15:44.556 "enable_recv_pipe": true, 00:15:44.556 "enable_quickack": false, 00:15:44.556 "enable_placement_id": 0, 00:15:44.556 "enable_zerocopy_send_server": false, 00:15:44.556 "enable_zerocopy_send_client": false, 00:15:44.556 "zerocopy_threshold": 0, 00:15:44.556 "tls_version": 0, 00:15:44.556 "enable_ktls": false 00:15:44.556 } 00:15:44.556 } 00:15:44.556 ] 00:15:44.556 }, 00:15:44.556 { 00:15:44.556 "subsystem": "vmd", 00:15:44.556 "config": [] 00:15:44.556 }, 00:15:44.556 { 00:15:44.556 "subsystem": "accel", 00:15:44.556 "config": [ 00:15:44.556 { 00:15:44.556 "method": "accel_set_options", 00:15:44.556 "params": { 00:15:44.556 "small_cache_size": 128, 00:15:44.556 "large_cache_size": 16, 00:15:44.556 "task_count": 2048, 00:15:44.556 "sequence_count": 2048, 
00:15:44.556 "buf_count": 2048 00:15:44.556 } 00:15:44.556 } 00:15:44.556 ] 00:15:44.556 }, 00:15:44.556 { 00:15:44.556 "subsystem": "bdev", 00:15:44.556 "config": [ 00:15:44.556 { 00:15:44.557 "method": "bdev_set_options", 00:15:44.557 "params": { 00:15:44.557 "bdev_io_pool_size": 65535, 00:15:44.557 "bdev_io_cache_size": 256, 00:15:44.557 "bdev_auto_examine": true, 00:15:44.557 "iobuf_small_cache_size": 128, 00:15:44.557 "iobuf_large_cache_size": 16 00:15:44.557 } 00:15:44.557 }, 00:15:44.557 { 00:15:44.557 "method": "bdev_raid_set_options", 00:15:44.557 "params": { 00:15:44.557 "process_window_size_kb": 1024, 00:15:44.557 "process_max_bandwidth_mb_sec": 0 00:15:44.557 } 00:15:44.557 }, 00:15:44.557 { 00:15:44.557 "method": "bdev_iscsi_set_options", 00:15:44.557 "params": { 00:15:44.557 "timeout_sec": 30 00:15:44.557 } 00:15:44.557 }, 00:15:44.557 { 00:15:44.557 "method": "bdev_nvme_set_options", 00:15:44.557 "params": { 00:15:44.557 "action_on_timeout": "none", 00:15:44.557 "timeout_us": 0, 00:15:44.557 "timeout_admin_us": 0, 00:15:44.557 "keep_alive_timeout_ms": 10000, 00:15:44.557 "arbitration_burst": 0, 00:15:44.557 "low_priority_weight": 0, 00:15:44.558 "medium_priority_weight": 0, 00:15:44.558 "high_priority_weight": 0, 00:15:44.558 "nvme_adminq_poll_period_us": 10000, 00:15:44.558 "nvme_ioq_poll_period_us": 0, 00:15:44.558 "io_queue_requests": 512, 00:15:44.558 "delay_cmd_submit": true, 00:15:44.558 "transport_retry_count": 4, 00:15:44.558 "bdev_retry_count": 3, 00:15:44.558 "transport_ack_timeout": 0, 00:15:44.558 "ctrlr_loss_timeout_sec": 0, 00:15:44.558 "reconnect_delay_sec": 0, 00:15:44.558 "fast_io_fail_timeout_sec": 0, 00:15:44.558 "disable_auto_failback": false, 00:15:44.558 "generate_uuids": false, 00:15:44.558 "transport_tos": 0, 00:15:44.558 "nvme_error_stat": false, 00:15:44.558 "rdma_srq_size": 0, 00:15:44.559 "io_path_stat": false, 00:15:44.559 "allow_accel_sequence": false, 00:15:44.559 "rdma_max_cq_size": 0, 00:15:44.559 "rdma_cm_event_timeout_ms": 0, 00:15:44.559 "dhchap_digests": [ 00:15:44.559 "sha256", 00:15:44.559 "sha384", 00:15:44.559 "sha512" 00:15:44.559 ], 00:15:44.559 "dhchap_dhgroups": [ 00:15:44.559 "null", 00:15:44.559 "ffdhe2048", 00:15:44.559 "ffdhe3072", 00:15:44.559 "ffdhe4096", 00:15:44.559 "ffdhe6144", 00:15:44.559 "ffdhe8192" 00:15:44.559 ] 00:15:44.559 } 00:15:44.559 }, 00:15:44.559 { 00:15:44.559 "method": "bdev_nvme_attach_controller", 00:15:44.559 "params": { 00:15:44.559 "name": "nvme0", 00:15:44.559 "trtype": "TCP", 00:15:44.559 "adrfam": "IPv4", 00:15:44.559 "traddr": "10.0.0.3", 00:15:44.559 "trsvcid": "4420", 00:15:44.559 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.559 "prchk_reftag": false, 00:15:44.559 "prchk_guard": false, 00:15:44.559 "ctrlr_loss_timeout_sec": 0, 00:15:44.559 "reconnect_delay_sec": 0, 00:15:44.559 "fast_io_fail_timeout_sec": 0, 00:15:44.559 "psk": "key0", 00:15:44.559 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:44.559 "hdgst": false, 00:15:44.559 "ddgst": false, 00:15:44.559 "multipath": "multipath" 00:15:44.559 } 00:15:44.559 }, 00:15:44.559 { 00:15:44.559 "method": "bdev_nvme_set_hotplug", 00:15:44.559 "params": { 00:15:44.559 "period_us": 100000, 00:15:44.559 "enable": false 00:15:44.559 } 00:15:44.559 }, 00:15:44.559 { 00:15:44.559 "method": "bdev_enable_histogram", 00:15:44.559 "params": { 00:15:44.559 "name": "nvme0n1", 00:15:44.560 "enable": true 00:15:44.560 } 00:15:44.560 }, 00:15:44.560 { 00:15:44.560 "method": "bdev_wait_for_examine" 00:15:44.560 } 00:15:44.560 ] 00:15:44.560 }, 00:15:44.560 { 
00:15:44.560 "subsystem": "nbd", 00:15:44.560 "config": [] 00:15:44.560 } 00:15:44.560 ] 00:15:44.560 }' 00:15:44.560 [2024-11-15 10:39:09.857539] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:15:44.560 [2024-11-15 10:39:09.857663] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72657 ] 00:15:44.560 [2024-11-15 10:39:10.010188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.818 [2024-11-15 10:39:10.079507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.818 [2024-11-15 10:39:10.218138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:44.818 [2024-11-15 10:39:10.271952] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:45.752 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:45.752 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:45.752 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:45.752 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:15:45.752 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.752 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:46.009 Running I/O for 1 seconds... 
00:15:46.943 3968.00 IOPS, 15.50 MiB/s 00:15:46.943 Latency(us) 00:15:46.943 [2024-11-15T10:39:12.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.943 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:46.943 Verification LBA range: start 0x0 length 0x2000 00:15:46.943 nvme0n1 : 1.03 3977.23 15.54 0.00 0.00 31826.05 7268.54 19422.49 00:15:46.943 [2024-11-15T10:39:12.441Z] =================================================================================================================== 00:15:46.943 [2024-11-15T10:39:12.441Z] Total : 3977.23 15.54 0.00 0.00 31826.05 7268.54 19422.49 00:15:46.943 { 00:15:46.943 "results": [ 00:15:46.943 { 00:15:46.943 "job": "nvme0n1", 00:15:46.943 "core_mask": "0x2", 00:15:46.943 "workload": "verify", 00:15:46.943 "status": "finished", 00:15:46.943 "verify_range": { 00:15:46.943 "start": 0, 00:15:46.943 "length": 8192 00:15:46.943 }, 00:15:46.943 "queue_depth": 128, 00:15:46.943 "io_size": 4096, 00:15:46.943 "runtime": 1.029863, 00:15:46.943 "iops": 3977.2280390692745, 00:15:46.943 "mibps": 15.536047027614353, 00:15:46.943 "io_failed": 0, 00:15:46.943 "io_timeout": 0, 00:15:46.943 "avg_latency_us": 31826.04727272727, 00:15:46.943 "min_latency_us": 7268.538181818182, 00:15:46.943 "max_latency_us": 19422.487272727274 00:15:46.943 } 00:15:46.943 ], 00:15:46.943 "core_count": 1 00:15:46.943 } 00:15:46.943 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:15:46.943 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:15:46.943 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:46.943 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:15:46.943 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:15:46.943 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:15:46.943 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:46.943 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:15:46.943 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:15:46.943 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:15:46.943 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:46.943 nvmf_trace.0 00:15:47.202 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:15:47.202 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72657 00:15:47.202 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72657 ']' 00:15:47.202 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72657 00:15:47.202 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:47.202 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:47.202 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72657 00:15:47.202 10:39:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:47.202 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:47.202 killing process with pid 72657 00:15:47.202 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72657' 00:15:47.202 Received shutdown signal, test time was about 1.000000 seconds 00:15:47.202 00:15:47.202 Latency(us) 00:15:47.202 [2024-11-15T10:39:12.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.202 [2024-11-15T10:39:12.700Z] =================================================================================================================== 00:15:47.202 [2024-11-15T10:39:12.700Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:47.202 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72657 00:15:47.202 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72657 00:15:47.461 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:47.461 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:47.461 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:15:47.461 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:47.461 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:15:47.461 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:47.461 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:47.461 rmmod nvme_tcp 00:15:47.461 rmmod nvme_fabrics 00:15:47.461 rmmod nvme_keyring 00:15:47.461 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:47.461 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:15:47.461 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:15:47.461 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72622 ']' 00:15:47.461 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72622 00:15:47.461 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72622 ']' 00:15:47.461 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72622 00:15:47.461 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:47.461 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:47.461 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72622 00:15:47.461 killing process with pid 72622 00:15:47.461 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:47.461 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:47.461 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72622' 00:15:47.461 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72622 00:15:47.461 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # 
wait 72622 00:15:47.719 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:47.719 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:47.719 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:47.719 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:15:47.719 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:15:47.719 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:47.719 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:15:47.719 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:47.719 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:47.719 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:47.719 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:47.719 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:47.719 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:47.719 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:47.719 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:47.719 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:47.719 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:47.719 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:47.719 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:47.978 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:47.978 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:47.978 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:47.978 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:47.978 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.978 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.978 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.978 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:15:47.978 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.7mAnlt4fbb /tmp/tmp.DQdXqdA7oQ /tmp/tmp.4e8bETFlTP 00:15:47.978 ************************************ 00:15:47.978 END TEST nvmf_tls 00:15:47.978 ************************************ 00:15:47.978 00:15:47.978 real 1m25.959s 00:15:47.978 user 2m21.263s 00:15:47.978 sys 0m27.068s 00:15:47.978 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 
-- # xtrace_disable 00:15:47.978 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:47.978 10:39:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:47.978 10:39:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:47.978 10:39:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:47.978 10:39:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:47.978 ************************************ 00:15:47.978 START TEST nvmf_fips 00:15:47.978 ************************************ 00:15:47.978 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:47.978 * Looking for test storage... 00:15:47.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:47.978 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:47.978 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:15:47.978 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:48.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.253 --rc genhtml_branch_coverage=1 00:15:48.253 --rc genhtml_function_coverage=1 00:15:48.253 --rc genhtml_legend=1 00:15:48.253 --rc geninfo_all_blocks=1 00:15:48.253 --rc geninfo_unexecuted_blocks=1 00:15:48.253 00:15:48.253 ' 00:15:48.253 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:48.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.253 --rc genhtml_branch_coverage=1 00:15:48.254 --rc genhtml_function_coverage=1 00:15:48.254 --rc genhtml_legend=1 00:15:48.254 --rc geninfo_all_blocks=1 00:15:48.254 --rc geninfo_unexecuted_blocks=1 00:15:48.254 00:15:48.254 ' 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:48.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.254 --rc genhtml_branch_coverage=1 00:15:48.254 --rc genhtml_function_coverage=1 00:15:48.254 --rc genhtml_legend=1 00:15:48.254 --rc geninfo_all_blocks=1 00:15:48.254 --rc geninfo_unexecuted_blocks=1 00:15:48.254 00:15:48.254 ' 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:48.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.254 --rc genhtml_branch_coverage=1 00:15:48.254 --rc genhtml_function_coverage=1 00:15:48.254 --rc genhtml_legend=1 00:15:48.254 --rc geninfo_all_blocks=1 00:15:48.254 --rc geninfo_unexecuted_blocks=1 00:15:48.254 00:15:48.254 ' 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:48.254 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
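The "[: : integer expression expected" message above is harmless test noise rather than a failure: common.sh line 33 runs '[' '' -eq 1 ']', and test's -eq refuses an empty operand, so the branch simply falls through. A hypothetical reproduction (var stands in for whatever variable common.sh tests at line 33, which the trace does not show expanded):

    var=''
    [ "$var" -eq 1 ] && echo enabled       # -> "[: : integer expression expected"
    [ "${var:-0}" -eq 1 ] && echo enabled  # defaulted expansion: quiet, evaluates false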
target=3.0.0 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:48.254 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:15:48.255 Error setting digest 00:15:48.255 4022422ED97F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:15:48.255 4022422ED97F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:48.255 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:48.514 
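This block is the whole FIPS gate: confirm fips.so exists under the OpenSSL modulesdir, tolerate the RHEL-specific fipsinstall warning, point OPENSSL_CONF at a generated spdk_fips.conf, require both a base and a fips provider in 'openssl list -providers', then prove enforcement by watching a non-approved digest fail, which is exactly what the "Error setting digest" lines show. A condensed sketch of the same probes (not fips.sh itself):

    export OPENSSL_CONF=spdk_fips.conf   # generated config that loads base + fips providers
    openssl list -providers | grep -qi fips || { echo 'no FIPS provider loaded' >&2; exit 1; }

    # Under FIPS, a non-approved digest must be rejected; success here means FIPS is off.
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo 'MD5 succeeded: FIPS not enforced' >&2; exit 1
    fi
    echo 'FIPS mode active'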
10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:48.514 Cannot find device "nvmf_init_br" 00:15:48.514 10:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:48.514 Cannot find device "nvmf_init_br2" 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:48.514 Cannot find device "nvmf_tgt_br" 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:48.514 Cannot find device "nvmf_tgt_br2" 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:48.514 Cannot find device "nvmf_init_br" 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:48.514 Cannot find device "nvmf_init_br2" 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:48.514 Cannot find device "nvmf_tgt_br" 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:48.514 Cannot find device "nvmf_tgt_br2" 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:48.514 Cannot find device "nvmf_br" 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:48.514 Cannot find device "nvmf_init_if" 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:48.514 Cannot find device "nvmf_init_if2" 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:48.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:48.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:48.514 10:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:48.514 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:48.515 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:48.515 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:48.515 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:48.515 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:48.515 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:48.515 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:48.515 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:48.515 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:48.515 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:48.774 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:48.774 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:48.774 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:48.774 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:48.774 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:48.774 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:48.774 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:48.774 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:48.774 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:48.774 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:48.774 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:48.774 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:48.774 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:48.774 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.202 ms 00:15:48.774 00:15:48.774 --- 10.0.0.3 ping statistics --- 00:15:48.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.774 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:15:48.774 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:48.774 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:48.774 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:15:48.774 00:15:48.774 --- 10.0.0.4 ping statistics --- 00:15:48.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.774 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:48.774 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:48.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:48.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:48.774 00:15:48.774 --- 10.0.0.1 ping statistics --- 00:15:48.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.774 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:48.774 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:48.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:48.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:15:48.775 00:15:48.775 --- 10.0.0.2 ping statistics --- 00:15:48.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.775 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:48.775 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.775 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:15:48.775 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:48.775 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.775 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:48.775 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:48.775 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.775 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:48.775 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:48.775 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:15:48.775 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:48.775 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:48.775 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:48.775 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72971 00:15:48.775 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72971 00:15:48.775 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:48.775 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 72971 ']' 00:15:48.775 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.775 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:48.775 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.775 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:48.775 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:48.775 [2024-11-15 10:39:14.204056] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
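Condensed, the topology those four pings just verified is two veth pairs per side joined by one bridge, with the target ends moved into the nvmf_tgt_ns_spdk namespace; the nvmf_tgt app is then launched inside that namespace (NVMF_APP gains the 'ip netns exec' prefix, as seen above). A sketch of the equivalent bare commands for one initiator/target pair (the test also wires up the *_if2 pair and tags its iptables rules with SPDK_NVMF comments):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the netns

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br        # both peer ends onto the same bridge
    ip link set nvmf_tgt_br  master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # host -> target netns, across the bridge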
00:15:48.775 [2024-11-15 10:39:14.204305] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.033 [2024-11-15 10:39:14.354289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.033 [2024-11-15 10:39:14.418537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.033 [2024-11-15 10:39:14.418601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.033 [2024-11-15 10:39:14.418616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.033 [2024-11-15 10:39:14.418627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.033 [2024-11-15 10:39:14.418635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.033 [2024-11-15 10:39:14.419089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.033 [2024-11-15 10:39:14.477387] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:49.292 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:49.292 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:15:49.292 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:49.292 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:49.292 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:49.292 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.292 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:15:49.292 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:49.292 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:15:49.292 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.2Sp 00:15:49.292 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:49.292 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.2Sp 00:15:49.292 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.2Sp 00:15:49.292 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.2Sp 00:15:49.292 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:49.550 [2024-11-15 10:39:14.886283] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.550 [2024-11-15 10:39:14.902214] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:49.550 [2024-11-15 10:39:14.902455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:49.550 malloc0 00:15:49.550 10:39:14 
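The secret above is an NVMe/TCP TLS PSK in interchange format (the 'NVMeTLSkey-1:01:...' string, trailing colon included), staged in a temp file so the bdevperf side can load it by path. The same provisioning, condensed from the trace (the 0600 mode is what the test sets; presumably the keyring file backend expects restrictive permissions):

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=$(mktemp -t spdk-psk.XXX)
    echo -n "$key" > "$key_path"   # -n: the key must not pick up a trailing newline
    chmod 0600 "$key_path"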
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:49.550 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=73005 00:15:49.550 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 73005 /var/tmp/bdevperf.sock 00:15:49.550 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:49.550 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 73005 ']' 00:15:49.550 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:49.550 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:49.550 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:49.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:49.550 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:49.550 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:49.808 [2024-11-15 10:39:15.078614] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:15:49.808 [2024-11-15 10:39:15.078756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73005 ] 00:15:49.808 [2024-11-15 10:39:15.239450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.066 [2024-11-15 10:39:15.307354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:50.066 [2024-11-15 10:39:15.364958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:50.633 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:50.633 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:15:50.633 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.2Sp 00:15:50.890 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:51.149 [2024-11-15 10:39:16.572400] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:51.406 TLSTESTn1 00:15:51.406 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:51.406 Running I/O for 10 seconds... 
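On the initiator side the flow is: start bdevperf idle (-z) with the verify workload queued, register the PSK file under a key name on bdevperf's private RPC socket, attach an NVMe-oF controller over TLS using that key, then fire the queued job. Condensed from the trace, with paths and NQNs as chosen there:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # Register the PSK file as "key0", then dial TLS with it:
    "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/spdk-psk.2Sp
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk key0

    # Namespace 1 surfaces as bdev TLSTESTn1; release the queued verify job:
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests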
00:15:53.276 3870.00 IOPS, 15.12 MiB/s [2024-11-15T10:39:20.148Z] 3940.00 IOPS, 15.39 MiB/s [2024-11-15T10:39:21.082Z] 3963.00 IOPS, 15.48 MiB/s [2024-11-15T10:39:22.016Z] 3977.25 IOPS, 15.54 MiB/s [2024-11-15T10:39:22.961Z] 3977.80 IOPS, 15.54 MiB/s [2024-11-15T10:39:23.896Z] 3969.50 IOPS, 15.51 MiB/s [2024-11-15T10:39:24.831Z] 3975.57 IOPS, 15.53 MiB/s [2024-11-15T10:39:26.205Z] 3973.38 IOPS, 15.52 MiB/s [2024-11-15T10:39:27.139Z] 3971.33 IOPS, 15.51 MiB/s [2024-11-15T10:39:27.139Z] 3974.00 IOPS, 15.52 MiB/s 00:16:01.641 Latency(us) 00:16:01.641 [2024-11-15T10:39:27.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.641 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:01.641 Verification LBA range: start 0x0 length 0x2000 00:16:01.641 TLSTESTn1 : 10.02 3979.11 15.54 0.00 0.00 32103.11 7149.38 25976.09 00:16:01.641 [2024-11-15T10:39:27.140Z] =================================================================================================================== 00:16:01.642 [2024-11-15T10:39:27.140Z] Total : 3979.11 15.54 0.00 0.00 32103.11 7149.38 25976.09 00:16:01.642 { 00:16:01.642 "results": [ 00:16:01.642 { 00:16:01.642 "job": "TLSTESTn1", 00:16:01.642 "core_mask": "0x4", 00:16:01.642 "workload": "verify", 00:16:01.642 "status": "finished", 00:16:01.642 "verify_range": { 00:16:01.642 "start": 0, 00:16:01.642 "length": 8192 00:16:01.642 }, 00:16:01.642 "queue_depth": 128, 00:16:01.642 "io_size": 4096, 00:16:01.642 "runtime": 10.018834, 00:16:01.642 "iops": 3979.105752226257, 00:16:01.642 "mibps": 15.543381844633817, 00:16:01.642 "io_failed": 0, 00:16:01.642 "io_timeout": 0, 00:16:01.642 "avg_latency_us": 32103.10608611576, 00:16:01.642 "min_latency_us": 7149.381818181818, 00:16:01.642 "max_latency_us": 25976.087272727273 00:16:01.642 } 00:16:01.642 ], 00:16:01.642 "core_count": 1 00:16:01.642 } 00:16:01.642 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:16:01.642 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:16:01.642 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:16:01.642 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:16:01.642 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:16:01.642 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:01.642 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:16:01.642 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:16:01.642 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:16:01.642 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:01.642 nvmf_trace.0 00:16:01.642 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:16:01.642 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73005 00:16:01.642 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 73005 ']' 00:16:01.642 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 
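bdevperf closes the run by emitting the results as JSON alongside the human-readable table. Had the JSON block above been captured to a file, the headline numbers could be pulled out like this (a sketch assuming jq is available; the test itself just prints the JSON):

    jq -r '.results[] | "\(.job): \(.iops | floor) IOPS, avg \(.avg_latency_us | floor) us"' results.json
    # -> TLSTESTn1: 3979 IOPS, avg 32103 us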
73005 00:16:01.642 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:16:01.642 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:01.642 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73005 00:16:01.642 killing process with pid 73005 00:16:01.642 Received shutdown signal, test time was about 10.000000 seconds 00:16:01.642 00:16:01.642 Latency(us) 00:16:01.642 [2024-11-15T10:39:27.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.642 [2024-11-15T10:39:27.140Z] =================================================================================================================== 00:16:01.642 [2024-11-15T10:39:27.140Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:01.642 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:16:01.642 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:16:01.642 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73005' 00:16:01.642 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 73005 00:16:01.642 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 73005 00:16:01.642 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:16:01.642 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:01.642 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:16:01.900 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:01.900 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:16:01.900 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:01.900 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:01.900 rmmod nvme_tcp 00:16:01.900 rmmod nvme_fabrics 00:16:01.900 rmmod nvme_keyring 00:16:01.900 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:01.900 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:16:01.900 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:16:01.900 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72971 ']' 00:16:01.900 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72971 00:16:01.900 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 72971 ']' 00:16:01.900 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 72971 00:16:01.900 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:16:01.900 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:01.900 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72971 00:16:01.900 killing process with pid 72971 00:16:01.900 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:01.900 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:01.900 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72971' 00:16:01.900 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 72971 00:16:01.901 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 72971 00:16:02.159 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:02.159 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:02.159 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:02.159 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:16:02.159 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:16:02.159 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:02.159 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:16:02.159 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:02.159 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:02.159 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:02.159 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:02.159 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:02.159 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:02.159 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:02.159 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:02.159 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:02.159 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:02.159 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:02.159 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:02.159 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:02.159 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:02.418 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:02.418 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:02.418 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.418 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.418 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.418 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:16:02.418 10:39:27 
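Teardown is the inverse of setup, and the iptables cleanup is the tidy part: every rule the test inserted carries an '-m comment --comment SPDK_NVMF:...' tag, so 'iptr' can strip exactly those rules with a save/filter/restore round-trip, leaving any pre-existing firewall state alone. A sketch of the cleanup for one pair (the final netns removal happens inside _remove_spdk_ns, whose trace is redirected away above, so that step is assumed):

    # Remove only SPDK's tagged rules, nothing else:
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk   # assumed to be what _remove_spdk_ns performs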
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.2Sp 00:16:02.418 ************************************ 00:16:02.418 END TEST nvmf_fips 00:16:02.418 ************************************ 00:16:02.418 00:16:02.418 real 0m14.374s 00:16:02.418 user 0m20.242s 00:16:02.418 sys 0m5.701s 00:16:02.418 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:02.418 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:02.418 10:39:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:16:02.418 10:39:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:02.418 10:39:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:02.418 10:39:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:02.418 ************************************ 00:16:02.418 START TEST nvmf_control_msg_list 00:16:02.418 ************************************ 00:16:02.418 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:16:02.418 * Looking for test storage... 00:16:02.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:02.418 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:02.418 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:16:02.418 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:02.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.678 --rc genhtml_branch_coverage=1 00:16:02.678 --rc genhtml_function_coverage=1 00:16:02.678 --rc genhtml_legend=1 00:16:02.678 --rc geninfo_all_blocks=1 00:16:02.678 --rc geninfo_unexecuted_blocks=1 00:16:02.678 00:16:02.678 ' 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:02.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.678 --rc genhtml_branch_coverage=1 00:16:02.678 --rc genhtml_function_coverage=1 00:16:02.678 --rc genhtml_legend=1 00:16:02.678 --rc geninfo_all_blocks=1 00:16:02.678 --rc geninfo_unexecuted_blocks=1 00:16:02.678 00:16:02.678 ' 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:02.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.678 --rc genhtml_branch_coverage=1 00:16:02.678 --rc genhtml_function_coverage=1 00:16:02.678 --rc genhtml_legend=1 00:16:02.678 --rc geninfo_all_blocks=1 00:16:02.678 --rc geninfo_unexecuted_blocks=1 00:16:02.678 00:16:02.678 ' 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:02.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.678 --rc genhtml_branch_coverage=1 00:16:02.678 --rc genhtml_function_coverage=1 00:16:02.678 --rc genhtml_legend=1 00:16:02.678 --rc geninfo_all_blocks=1 00:16:02.678 --rc 
geninfo_unexecuted_blocks=1 00:16:02.678 00:16:02.678 ' 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:16:02.678 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.679 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.679 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.679 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.679 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.679 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.679 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.679 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.679 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.679 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.679 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:16:02.679 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:16:02.679 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.679 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.679 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:02.679 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.679 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:02.679 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:16:02.679 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.679 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.679 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:02.679 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:02.679 Cannot find device "nvmf_init_br" 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:02.679 Cannot find device "nvmf_init_br2" 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:02.679 Cannot find device "nvmf_tgt_br" 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:02.679 Cannot find device "nvmf_tgt_br2" 00:16:02.679 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:16:02.680 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:02.680 Cannot find device "nvmf_init_br" 00:16:02.680 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:16:02.680 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:02.680 Cannot find device "nvmf_init_br2" 00:16:02.680 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:16:02.680 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:02.680 Cannot find device "nvmf_tgt_br" 00:16:02.680 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:16:02.680 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:02.680 Cannot find device "nvmf_tgt_br2" 00:16:02.680 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:16:02.680 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:02.680 Cannot find device "nvmf_br" 00:16:02.680 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:16:02.680 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:02.680 Cannot find 
device "nvmf_init_if" 00:16:02.680 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:16:02.680 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:02.680 Cannot find device "nvmf_init_if2" 00:16:02.680 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:16:02.680 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:02.680 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:02.680 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:16:02.680 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:02.680 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:02.680 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:16:02.680 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:02.680 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:02.680 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:02.939 10:39:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:02.939 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:02.939 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:16:02.939 00:16:02.939 --- 10.0.0.3 ping statistics --- 00:16:02.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.939 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:02.939 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:02.939 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:16:02.939 00:16:02.939 --- 10.0.0.4 ping statistics --- 00:16:02.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.939 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:16:02.939 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:02.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:02.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:02.940 00:16:02.940 --- 10.0.0.1 ping statistics --- 00:16:02.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.940 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:02.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:02.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:16:02.940 00:16:02.940 --- 10.0.0.2 ping statistics --- 00:16:02.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.940 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:02.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73396 00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73396 00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 73396 ']' 00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
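The nvmf_veth_init sequence traced above (test/nvmf/common.sh) is worth pulling out: on a single host it builds four veth pairs, a bridge, and a network namespace so the machine can speak NVMe/TCP to itself over real interfaces. The sketch below condenses the exact commands from the trace into a standalone script; it assumes root on a Linux box with iproute2 and iptables, and it simplifies the rule comment to the bare SPDK_NVMF tag that teardown later greps for.

    #!/usr/bin/env bash
    # Condensed from the nvmf_veth_init trace above; run as root.
    set -euo pipefail

    ip netns add nvmf_tgt_ns_spdk

    # One veth pair per endpoint; the *_br peers stay in the root namespace.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target-side interfaces move into the namespace the target will run in.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Initiator side owns 10.0.0.1/.2, target side 10.0.0.3/.4.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # A bridge stitches the four root-namespace peers together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Admit NVMe/TCP (port 4420) plus bridged traffic; the comment tag is what
    # lets teardown strip exactly these rules via iptables-save | iptables-restore.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF

The payoff of this topology is that the root namespace acts as initiator (10.0.0.1/.2) while nvmf_tgt_ns_spdk hosts the target (10.0.0.3/.4), so the four ping checks in the trace verify both directions before any NVMe traffic flows.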
00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:02.940 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:03.199 [2024-11-15 10:39:28.460737] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:16:03.199 [2024-11-15 10:39:28.461069] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.199 [2024-11-15 10:39:28.604827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.199 [2024-11-15 10:39:28.666006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.199 [2024-11-15 10:39:28.666060] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.199 [2024-11-15 10:39:28.666073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.199 [2024-11-15 10:39:28.666081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.199 [2024-11-15 10:39:28.666088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.199 [2024-11-15 10:39:28.666468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.457 [2024-11-15 10:39:28.722175] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:04.024 [2024-11-15 10:39:29.455315] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:04.024 Malloc0 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:04.024 [2024-11-15 10:39:29.494698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73428 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73429 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73430 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:04.024 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73428 00:16:04.283 [2024-11-15 10:39:29.673280] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:04.283 [2024-11-15 10:39:29.673850] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:04.283 [2024-11-15 10:39:29.703065] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:05.290 Initializing NVMe Controllers 00:16:05.290 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:05.290 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:16:05.290 Initialization complete. Launching workers. 00:16:05.290 ======================================================== 00:16:05.290 Latency(us) 00:16:05.290 Device Information : IOPS MiB/s Average min max 00:16:05.290 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3436.96 13.43 290.60 167.76 618.29 00:16:05.290 ======================================================== 00:16:05.290 Total : 3436.96 13.43 290.60 167.76 618.29 00:16:05.290 00:16:05.290 Initializing NVMe Controllers 00:16:05.290 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:05.290 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:16:05.290 Initialization complete. Launching workers. 00:16:05.290 ======================================================== 00:16:05.290 Latency(us) 00:16:05.290 Device Information : IOPS MiB/s Average min max 00:16:05.290 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3443.00 13.45 290.12 178.29 619.12 00:16:05.290 ======================================================== 00:16:05.290 Total : 3443.00 13.45 290.12 178.29 619.12 00:16:05.290 00:16:05.290 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73429 00:16:05.290 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73430 00:16:05.290 Initializing NVMe Controllers 00:16:05.290 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:05.290 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:16:05.290 Initialization complete. Launching workers. 
00:16:05.290 ======================================================== 00:16:05.290 Latency(us) 00:16:05.290 Device Information : IOPS MiB/s Average min max 00:16:05.290 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3536.00 13.81 282.42 118.86 614.47 00:16:05.290 ======================================================== 00:16:05.290 Total : 3536.00 13.81 282.42 118.86 614.47 00:16:05.290 00:16:05.290 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:05.290 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:16:05.290 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:05.290 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:16:05.549 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:05.549 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:16:05.549 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:05.549 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:05.549 rmmod nvme_tcp 00:16:05.549 rmmod nvme_fabrics 00:16:05.549 rmmod nvme_keyring 00:16:05.549 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:05.549 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:16:05.549 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:16:05.550 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73396 ']' 00:16:05.550 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73396 00:16:05.550 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 73396 ']' 00:16:05.550 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 73396 00:16:05.550 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:16:05.550 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:05.550 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73396 00:16:05.550 killing process with pid 73396 00:16:05.550 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:05.550 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:05.550 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73396' 00:16:05.550 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 73396 00:16:05.550 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 73396 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # 
[[ tcp == \t\c\p ]] 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:05.808 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:16:06.067 00:16:06.067 real 0m3.535s 00:16:06.067 user 0m5.543s 00:16:06.067 sys 0m1.383s 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 
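With teardown also visible above (iptables-save | grep -v SPDK_NVMF | iptables-restore, then deleting the veths, bridge, and namespace), the whole control_msg_list test body is easy to reconstruct from the trace. A hedged sketch follows: rpc_cmd in the harness is a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the same steps are written directly against rpc.py here; paths are the CI VM's, and -o is carried over verbatim from the traced transport options.

    # Sketch of the control_msg_list flow traced above; assumes the veth
    # topology exists and that paths match the CI VM.
    SPDK=/home/vagrant/spdk_repo/spdk
    NQN=nqn.2024-07.io.spdk:cnode0

    # Target lives in the namespace; -i 0 sets the SHM id and -e 0xFFFF
    # enables every tracepoint group (both visible in the trace).
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &

    # A one-entry control message list (--control-msg-num 1) is the stress
    # point of this test.
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o \
        --in-capsule-data-size 768 --control-msg-num 1
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem "$NQN" -a       # allow any host
    "$SPDK/scripts/rpc.py" bdev_malloc_create -b Malloc0 32 512  # 32 MiB, 512 B blocks
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns "$NQN" Malloc0
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420

    # Three concurrent single-queue 4 KiB random-read clients on cores 1-3.
    for mask in 0x2 0x4 0x8; do
        "$SPDK/build/bin/spdk_nvme_perf" -c "$mask" -q 1 -o 4096 -w randread -t 1 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &
    done
    wait

All three clients land in the same band (roughly 3.4-3.5K IOPS at about 290 us average for QD1, per the tables above), suggesting the constrained control-message list throttles the connections fairly rather than wedging any one of them.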
00:16:06.067 ************************************ 00:16:06.067 END TEST nvmf_control_msg_list 00:16:06.067 ************************************ 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:06.067 ************************************ 00:16:06.067 START TEST nvmf_wait_for_buf 00:16:06.067 ************************************ 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:16:06.067 * Looking for test storage... 00:16:06.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:06.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.067 --rc genhtml_branch_coverage=1 00:16:06.067 --rc genhtml_function_coverage=1 00:16:06.067 --rc genhtml_legend=1 00:16:06.067 --rc geninfo_all_blocks=1 00:16:06.067 --rc geninfo_unexecuted_blocks=1 00:16:06.067 00:16:06.067 ' 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:06.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.067 --rc genhtml_branch_coverage=1 00:16:06.067 --rc genhtml_function_coverage=1 00:16:06.067 --rc genhtml_legend=1 00:16:06.067 --rc geninfo_all_blocks=1 00:16:06.067 --rc geninfo_unexecuted_blocks=1 00:16:06.067 00:16:06.067 ' 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:06.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.067 --rc genhtml_branch_coverage=1 00:16:06.067 --rc genhtml_function_coverage=1 00:16:06.067 --rc genhtml_legend=1 00:16:06.067 --rc geninfo_all_blocks=1 00:16:06.067 --rc geninfo_unexecuted_blocks=1 00:16:06.067 00:16:06.067 ' 00:16:06.067 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:06.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.067 --rc genhtml_branch_coverage=1 00:16:06.067 --rc genhtml_function_coverage=1 00:16:06.067 --rc genhtml_legend=1 00:16:06.067 --rc geninfo_all_blocks=1 00:16:06.067 --rc geninfo_unexecuted_blocks=1 00:16:06.068 00:16:06.068 ' 00:16:06.068 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:06.068 10:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:16:06.068 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.068 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.068 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.068 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.068 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.068 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.068 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.068 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.068 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.068 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:06.327 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
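Two recurring warts in this excerpt are worth flagging, since both repeat every time test/nvmf/common.sh is sourced. First, the PATH exports above keep re-prepending the same Go/protoc/golangci directories (paths/export.sh lines 2-4 each prepend unconditionally), so PATH grows by another copy of those entries per test. Second, "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" comes from [ '' -eq 1 ]: the flag being tested is empty, and -eq demands integer operands, so the test errors out; it is harmless here because the conditional branch is simply not taken. Hedged defensive forms for both, using illustrative variable names that are not SPDK's:

    # Hedged sketches; variable names are illustrative, not SPDK's.

    # Idempotent PATH prepend: only add a directory that is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already on PATH, nothing to do
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    export PATH

    # Empty-safe integer test: default the flag to 0 so '[' always sees a number.
    SOME_FLAG=""                      # hypothetical stand-in for the empty variable
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi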
00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:06.327 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:06.328 Cannot find device "nvmf_init_br" 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:06.328 Cannot find device "nvmf_init_br2" 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:06.328 Cannot find device "nvmf_tgt_br" 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:06.328 Cannot find device "nvmf_tgt_br2" 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:06.328 Cannot find device "nvmf_init_br" 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:06.328 Cannot find device "nvmf_init_br2" 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:06.328 Cannot find device "nvmf_tgt_br" 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:06.328 Cannot find device "nvmf_tgt_br2" 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:06.328 Cannot find device "nvmf_br" 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:06.328 Cannot find device "nvmf_init_if" 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:06.328 Cannot find device "nvmf_init_if2" 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:06.328 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:06.328 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:06.328 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:06.586 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:06.586 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:06.586 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:06.586 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:06.586 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:06.586 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:06.586 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:06.586 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:06.586 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:06.586 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:06.586 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:06.586 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:06.586 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:06.586 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:06.586 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:06.586 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:06.587 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:06.587 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:16:06.587 00:16:06.587 --- 10.0.0.3 ping statistics --- 00:16:06.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.587 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:06.587 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:06.587 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:16:06.587 00:16:06.587 --- 10.0.0.4 ping statistics --- 00:16:06.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.587 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:06.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:06.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:06.587 00:16:06.587 --- 10.0.0.1 ping statistics --- 00:16:06.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.587 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:06.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:06.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:16:06.587 00:16:06.587 --- 10.0.0.2 ping statistics --- 00:16:06.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.587 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73670 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73670 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 73670 ']' 00:16:06.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:06.587 10:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:06.587 [2024-11-15 10:39:32.053467] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
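Editor's note: the nvmfappstart trace above condenses to a two-step launch: run nvmf_tgt inside the test namespace with initialization held at --wait-for-rpc, then poll the UNIX-domain RPC socket until it answers. A minimal sketch of that pattern, assuming the default /var/tmp/spdk.sock socket and an arbitrary retry budget (the real waitforlisten helper is more elaborate):

# Launch the target inside the namespace, RPC-gated (command taken from the trace above).
SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
# Poll the RPC socket; rpc_get_methods succeeds once the app is listening.
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done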
00:16:06.587 [2024-11-15 10:39:32.053800] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.845 [2024-11-15 10:39:32.200221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.845 [2024-11-15 10:39:32.253100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.845 [2024-11-15 10:39:32.253362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.846 [2024-11-15 10:39:32.253383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:06.846 [2024-11-15 10:39:32.253392] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:06.846 [2024-11-15 10:39:32.253400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:06.846 [2024-11-15 10:39:32.253819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.846 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:06.846 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:16:06.846 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:06.846 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:06.846 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:07.103 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.103 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:16:07.103 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:16:07.103 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:16:07.103 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.103 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:07.103 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.103 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:16:07.103 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.103 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:07.103 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.103 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:16:07.103 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.103 10:39:32 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:07.103 [2024-11-15 10:39:32.420251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:07.103 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.103 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:16:07.104 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.104 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:07.104 Malloc0 00:16:07.104 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.104 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:16:07.104 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.104 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:07.104 [2024-11-15 10:39:32.484715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:07.104 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.104 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:16:07.104 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.104 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:07.104 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.104 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:16:07.104 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.104 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:07.104 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.104 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:07.104 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.104 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:07.104 [2024-11-15 10:39:32.512816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:07.104 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.104 10:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:07.361 [2024-11-15 10:39:32.714678] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:08.734 Initializing NVMe Controllers 00:16:08.734 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:08.734 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:16:08.734 Initialization complete. Launching workers. 00:16:08.734 ======================================================== 00:16:08.734 Latency(us) 00:16:08.734 Device Information : IOPS MiB/s Average min max 00:16:08.734 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 499.99 62.50 8000.17 6966.09 9760.76 00:16:08.734 ======================================================== 00:16:08.734 Total : 499.99 62.50 8000.17 6966.09 9760.76 00:16:08.734 00:16:08.734 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:16:08.734 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:16:08.734 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.734 10:39:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:08.734 rmmod nvme_tcp 00:16:08.734 rmmod nvme_fabrics 00:16:08.734 rmmod nvme_keyring 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73670 ']' 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73670 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 73670 ']' 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # 
kill -0 73670 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73670 00:16:08.734 killing process with pid 73670 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73670' 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 73670 00:16:08.734 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 73670 00:16:08.992 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:08.992 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:08.992 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:08.992 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:16:08.992 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:08.992 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:16:08.992 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:16:08.992 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:08.992 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:08.992 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:08.992 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:08.992 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:08.992 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:08.992 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:08.992 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:08.992 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:08.992 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:08.992 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:09.250 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:09.250 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:09.250 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:09.250 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:09.250 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:09.250 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.250 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:09.250 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.250 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:16:09.250 00:16:09.250 real 0m3.256s 00:16:09.250 user 0m2.574s 00:16:09.250 sys 0m0.798s 00:16:09.250 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:09.250 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:09.250 ************************************ 00:16:09.250 END TEST nvmf_wait_for_buf 00:16:09.250 ************************************ 00:16:09.250 10:39:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:16:09.250 10:39:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:16:09.250 10:39:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:16:09.250 10:39:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:09.250 10:39:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:09.250 10:39:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:09.250 ************************************ 00:16:09.250 START TEST nvmf_nsid 00:16:09.250 ************************************ 00:16:09.250 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:16:09.508 * Looking for test storage... 
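Editor's note: the nvmf_wait_for_buf test that just finished boils down to starving the iobuf small pool, driving large reads through the TCP transport, and confirming the target retried buffer allocation (4750 times here) instead of failing I/O. A condensed sketch of that sequence, using only RPCs visible in the trace and assuming rpc.py and spdk_nvme_perf are on PATH against a target started with --wait-for-rpc:

# Deliberately undersize the small iobuf pool, then finish init.
rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
rpc.py framework_start_init
# Back a TCP subsystem with a 32 MiB malloc bdev.
rpc.py bdev_malloc_create -b Malloc0 32 512
rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
# 128 KiB reads against an 8 KiB buffer pool force allocation retries.
spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
retries=$(rpc.py iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
[[ $retries -gt 0 ]]   # the run above recorded 4750 retries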
00:16:09.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:09.508 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:09.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.509 --rc genhtml_branch_coverage=1 00:16:09.509 --rc genhtml_function_coverage=1 00:16:09.509 --rc genhtml_legend=1 00:16:09.509 --rc geninfo_all_blocks=1 00:16:09.509 --rc geninfo_unexecuted_blocks=1 00:16:09.509 00:16:09.509 ' 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:09.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.509 --rc genhtml_branch_coverage=1 00:16:09.509 --rc genhtml_function_coverage=1 00:16:09.509 --rc genhtml_legend=1 00:16:09.509 --rc geninfo_all_blocks=1 00:16:09.509 --rc geninfo_unexecuted_blocks=1 00:16:09.509 00:16:09.509 ' 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:09.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.509 --rc genhtml_branch_coverage=1 00:16:09.509 --rc genhtml_function_coverage=1 00:16:09.509 --rc genhtml_legend=1 00:16:09.509 --rc geninfo_all_blocks=1 00:16:09.509 --rc geninfo_unexecuted_blocks=1 00:16:09.509 00:16:09.509 ' 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:09.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.509 --rc genhtml_branch_coverage=1 00:16:09.509 --rc genhtml_function_coverage=1 00:16:09.509 --rc genhtml_legend=1 00:16:09.509 --rc geninfo_all_blocks=1 00:16:09.509 --rc geninfo_unexecuted_blocks=1 00:16:09.509 00:16:09.509 ' 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
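Editor's note: the lcov probe above walks through cmp_versions in scripts/common.sh, which splits both version strings on '.', '-' and ':' and compares them field by field. A stripped-down re-implementation of the same idea (illustrative, not the verbatim helper, and assuming purely numeric fields):

version_lt() {
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i
    # Compare field by field; missing fields count as 0.
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1   # equal is not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"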
00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:09.509 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:09.509 Cannot find device "nvmf_init_br" 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:09.509 Cannot find device "nvmf_init_br2" 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:09.509 Cannot find device "nvmf_tgt_br" 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:09.509 Cannot find device "nvmf_tgt_br2" 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:09.509 Cannot find device "nvmf_init_br" 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:09.509 Cannot find device "nvmf_init_br2" 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:09.509 Cannot find device "nvmf_tgt_br" 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:09.509 Cannot find device "nvmf_tgt_br2" 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:16:09.509 10:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:09.509 Cannot find device "nvmf_br" 00:16:09.509 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:16:09.509 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:09.767 Cannot find device "nvmf_init_if" 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:09.767 Cannot find device "nvmf_init_if2" 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:09.767 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:16:09.767 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:09.767 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
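Editor's note: the nvmf_veth_init block above just rebuilt the test topology: veth pairs for the initiator and target sides, the target-side _if ends moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.3/24 and 10.0.0.4/24, the host ends keeping 10.0.0.1/24 and 10.0.0.2/24, and all bridge-side peers enslaved to nvmf_br. Condensed to one pair per side for readability (commands taken from the trace):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# Stitch the host-side peers together so initiator and target can reach each other.
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br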
00:16:10.025 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:10.026 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:10.026 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:16:10.026 00:16:10.026 --- 10.0.0.3 ping statistics --- 00:16:10.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.026 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:10.026 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:10.026 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:16:10.026 00:16:10.026 --- 10.0.0.4 ping statistics --- 00:16:10.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.026 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:10.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:10.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:16:10.026 00:16:10.026 --- 10.0.0.1 ping statistics --- 00:16:10.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.026 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:10.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:10.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:16:10.026 00:16:10.026 --- 10.0.0.2 ping statistics --- 00:16:10.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.026 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73937 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73937 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 73937 ']' 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:10.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:10.026 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:10.026 [2024-11-15 10:39:35.373960] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:16:10.026 [2024-11-15 10:39:35.374038] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.026 [2024-11-15 10:39:35.519391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.284 [2024-11-15 10:39:35.579644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.284 [2024-11-15 10:39:35.579715] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.284 [2024-11-15 10:39:35.579729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:10.284 [2024-11-15 10:39:35.579739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:10.284 [2024-11-15 10:39:35.579748] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:10.284 [2024-11-15 10:39:35.580190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.284 [2024-11-15 10:39:35.636690] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:10.284 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:10.284 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:16:10.284 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:10.284 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:10.284 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:10.284 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:10.284 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:10.284 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73956 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=cad51fc6-e60a-4cf5-ab64-226adfd48b5e 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=b2531f31-405f-45e0-8efb-92ac66c172df 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=579e2e73-94a0-4ff8-96a7-7c560123fa79 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.285 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:10.285 null0 00:16:10.543 null1 00:16:10.543 null2 00:16:10.543 [2024-11-15 10:39:35.796091] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:10.543 [2024-11-15 10:39:35.803491] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:16:10.543 [2024-11-15 10:39:35.803584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73956 ] 00:16:10.543 [2024-11-15 10:39:35.820208] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:10.543 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.543 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73956 /var/tmp/tgt2.sock 00:16:10.543 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 73956 ']' 00:16:10.543 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:16:10.543 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:10.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:16:10.543 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
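Editor's note: the nguid checks that follow hinge on one convention: a namespace's NVMe NGUID is its UUID with the dashes stripped, which nvme-cli reports as a hex string. A minimal sketch of the round-trip the test performs, assuming nvme-cli and jq are available and normalizing case on both sides:

ns1uuid=$(uuidgen)                  # e.g. cad51fc6-e60a-4cf5-ab64-226adfd48b5e
expected=$(tr -d - <<< "$ns1uuid")  # cad51fc6e60a4cf5ab64226adfd48b5e
# Read the NGUID the target actually exposed for NSID 1.
got=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
[[ ${got^^} == "${expected^^}" ]] && echo "NSID 1 nguid matches"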
00:16:10.543 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:10.543 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:10.543 [2024-11-15 10:39:35.946400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.543 [2024-11-15 10:39:36.000876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.802 [2024-11-15 10:39:36.068342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:10.802 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:10.802 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:16:10.802 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:16:11.369 [2024-11-15 10:39:36.664049] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.369 [2024-11-15 10:39:36.680129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:16:11.369 nvme0n1 nvme0n2 00:16:11.369 nvme1n1 00:16:11.369 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:16:11.369 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:16:11.369 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:16:11.369 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:16:11.369 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:16:11.369 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:16:11.369 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:16:11.369 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:16:11.369 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:16:11.369 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:16:11.369 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:16:11.627 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:16:11.627 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:16:11.627 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:16:11.627 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:16:11.627 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:16:12.586 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:16:12.586 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:16:12.586 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:16:12.587 10:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid cad51fc6-e60a-4cf5-ab64-226adfd48b5e 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=cad51fc6e60a4cf5ab64226adfd48b5e 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo CAD51FC6E60A4CF5AB64226ADFD48B5E 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ CAD51FC6E60A4CF5AB64226ADFD48B5E == \C\A\D\5\1\F\C\6\E\6\0\A\4\C\F\5\A\B\6\4\2\2\6\A\D\F\D\4\8\B\5\E ]] 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid b2531f31-405f-45e0-8efb-92ac66c172df 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:12.587 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:16:12.587 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b2531f31405f45e08efb92ac66c172df 00:16:12.587 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B2531F31405F45E08EFB92AC66C172DF 00:16:12.587 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ B2531F31405F45E08EFB92AC66C172DF == \B\2\5\3\1\F\3\1\4\0\5\F\4\5\E\0\8\E\F\B\9\2\A\C\6\6\C\1\7\2\D\F ]] 00:16:12.587 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:16:12.587 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:16:12.587 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:16:12.587 10:39:38 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:16:12.587 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:16:12.587 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:16:12.587 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:16:12.587 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 579e2e73-94a0-4ff8-96a7-7c560123fa79 00:16:12.587 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:12.587 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:16:12.587 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:16:12.587 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:16:12.587 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:12.845 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=579e2e7394a04ff896a77c560123fa79 00:16:12.845 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 579E2E7394A04FF896A77C560123FA79 00:16:12.845 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 579E2E7394A04FF896A77C560123FA79 == \5\7\9\E\2\E\7\3\9\4\A\0\4\F\F\8\9\6\A\7\7\C\5\6\0\1\2\3\F\A\7\9 ]] 00:16:12.845 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:16:12.845 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:16:12.845 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:16:12.845 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73956 00:16:12.845 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 73956 ']' 00:16:12.845 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 73956 00:16:12.845 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:16:12.845 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:12.845 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73956 00:16:12.845 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:12.845 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:12.845 killing process with pid 73956 00:16:12.845 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73956' 00:16:12.845 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 73956 00:16:12.845 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 73956 00:16:13.411 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:16:13.411 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:13.411 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:16:13.411 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:16:13.411 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:16:13.411 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:13.411 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:13.411 rmmod nvme_tcp 00:16:13.411 rmmod nvme_fabrics 00:16:13.411 rmmod nvme_keyring 00:16:13.411 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:13.411 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:16:13.411 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:16:13.411 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73937 ']' 00:16:13.411 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73937 00:16:13.411 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 73937 ']' 00:16:13.411 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 73937 00:16:13.411 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:16:13.411 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:13.411 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73937 00:16:13.411 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:13.411 killing process with pid 73937 00:16:13.411 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:13.411 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73937' 00:16:13.411 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 73937 00:16:13.411 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 73937 00:16:13.668 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:13.668 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:13.668 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:13.668 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:16:13.668 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:16:13.668 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:13.668 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:16:13.668 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:13.668 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:13.668 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:13.668 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:13.668 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:13.668 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:16:13.668 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:13.668 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:13.668 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:13.668 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:13.668 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:13.926 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:13.926 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:13.926 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:13.926 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:13.926 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:13.926 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.926 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:13.926 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.926 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:16:13.926 00:16:13.926 real 0m4.622s 00:16:13.926 user 0m6.762s 00:16:13.926 sys 0m1.677s 00:16:13.926 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:13.926 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:13.926 ************************************ 00:16:13.926 END TEST nvmf_nsid 00:16:13.926 ************************************ 00:16:13.926 10:39:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:13.926 00:16:13.926 real 5m16.022s 00:16:13.926 user 11m7.782s 00:16:13.926 sys 1m9.576s 00:16:13.926 10:39:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:13.926 10:39:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:13.926 ************************************ 00:16:13.926 END TEST nvmf_target_extra 00:16:13.926 ************************************ 00:16:13.926 10:39:39 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:13.926 10:39:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:13.926 10:39:39 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:13.926 10:39:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:13.926 ************************************ 00:16:13.926 START TEST nvmf_host 00:16:13.926 ************************************ 00:16:13.926 10:39:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:14.191 * Looking for test storage... 
00:16:14.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:14.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.191 --rc genhtml_branch_coverage=1 00:16:14.191 --rc genhtml_function_coverage=1 00:16:14.191 --rc genhtml_legend=1 00:16:14.191 --rc geninfo_all_blocks=1 00:16:14.191 --rc geninfo_unexecuted_blocks=1 00:16:14.191 00:16:14.191 ' 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:14.191 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:14.191 --rc genhtml_branch_coverage=1 00:16:14.191 --rc genhtml_function_coverage=1 00:16:14.191 --rc genhtml_legend=1 00:16:14.191 --rc geninfo_all_blocks=1 00:16:14.191 --rc geninfo_unexecuted_blocks=1 00:16:14.191 00:16:14.191 ' 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:14.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.191 --rc genhtml_branch_coverage=1 00:16:14.191 --rc genhtml_function_coverage=1 00:16:14.191 --rc genhtml_legend=1 00:16:14.191 --rc geninfo_all_blocks=1 00:16:14.191 --rc geninfo_unexecuted_blocks=1 00:16:14.191 00:16:14.191 ' 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:14.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.191 --rc genhtml_branch_coverage=1 00:16:14.191 --rc genhtml_function_coverage=1 00:16:14.191 --rc genhtml_legend=1 00:16:14.191 --rc geninfo_all_blocks=1 00:16:14.191 --rc geninfo_unexecuted_blocks=1 00:16:14.191 00:16:14.191 ' 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:14.191 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:16:14.191 10:39:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:14.191 
10:39:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:14.192 10:39:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:14.192 10:39:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.192 ************************************ 00:16:14.192 START TEST nvmf_identify 00:16:14.192 ************************************ 00:16:14.192 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:14.192 * Looking for test storage... 00:16:14.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:14.192 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:14.192 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:14.192 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:14.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.451 --rc genhtml_branch_coverage=1 00:16:14.451 --rc genhtml_function_coverage=1 00:16:14.451 --rc genhtml_legend=1 00:16:14.451 --rc geninfo_all_blocks=1 00:16:14.451 --rc geninfo_unexecuted_blocks=1 00:16:14.451 00:16:14.451 ' 00:16:14.451 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:14.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.451 --rc genhtml_branch_coverage=1 00:16:14.451 --rc genhtml_function_coverage=1 00:16:14.452 --rc genhtml_legend=1 00:16:14.452 --rc geninfo_all_blocks=1 00:16:14.452 --rc geninfo_unexecuted_blocks=1 00:16:14.452 00:16:14.452 ' 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:14.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.452 --rc genhtml_branch_coverage=1 00:16:14.452 --rc genhtml_function_coverage=1 00:16:14.452 --rc genhtml_legend=1 00:16:14.452 --rc geninfo_all_blocks=1 00:16:14.452 --rc geninfo_unexecuted_blocks=1 00:16:14.452 00:16:14.452 ' 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:14.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.452 --rc genhtml_branch_coverage=1 00:16:14.452 --rc genhtml_function_coverage=1 00:16:14.452 --rc genhtml_legend=1 00:16:14.452 --rc geninfo_all_blocks=1 00:16:14.452 --rc geninfo_unexecuted_blocks=1 00:16:14.452 00:16:14.452 ' 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.452 
10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:14.452 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.452 10:39:39 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:14.452 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:14.453 Cannot find device "nvmf_init_br" 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:14.453 Cannot find device "nvmf_init_br2" 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:14.453 Cannot find device "nvmf_tgt_br" 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:16:14.453 Cannot find device "nvmf_tgt_br2" 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:14.453 Cannot find device "nvmf_init_br" 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:14.453 Cannot find device "nvmf_init_br2" 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:14.453 Cannot find device "nvmf_tgt_br" 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:14.453 Cannot find device "nvmf_tgt_br2" 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:14.453 Cannot find device "nvmf_br" 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:14.453 Cannot find device "nvmf_init_if" 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:14.453 Cannot find device "nvmf_init_if2" 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:14.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:14.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:14.453 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:14.712 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:14.712 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:14.712 10:39:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:14.712 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:14.712 
10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:14.712 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:14.713 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:14.713 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:16:14.713 00:16:14.713 --- 10.0.0.3 ping statistics --- 00:16:14.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.713 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:14.713 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:14.713 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:16:14.713 00:16:14.713 --- 10.0.0.4 ping statistics --- 00:16:14.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.713 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:14.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:14.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:14.713 00:16:14.713 --- 10.0.0.1 ping statistics --- 00:16:14.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.713 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:14.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:14.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:16:14.713 00:16:14.713 --- 10.0.0.2 ping statistics --- 00:16:14.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.713 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74314 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74314 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 74314 ']' 00:16:14.713 
10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:14.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:14.713 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:14.971 [2024-11-15 10:39:40.265895] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:16:14.971 [2024-11-15 10:39:40.266016] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.971 [2024-11-15 10:39:40.420700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:15.230 [2024-11-15 10:39:40.490263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.230 [2024-11-15 10:39:40.490336] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.230 [2024-11-15 10:39:40.490350] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.230 [2024-11-15 10:39:40.490361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.230 [2024-11-15 10:39:40.490370] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
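
The launch-and-wait pattern recurring through this log (/var/tmp/tgt2.sock for the nsid test above, /var/tmp/spdk.sock for identify.sh here) is: start nvmf_tgt in the background, record its pid, then poll the RPC socket until it answers. A condensed sketch under those assumptions; the retry loop illustrates what waitforlisten does rather than quoting its exact body:

    # Sketch: start the target in the test netns and wait for its RPC socket
    # (cf. nvmfpid=74314 and max_retries=100 in the trace above).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done
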
00:16:15.230 [2024-11-15 10:39:40.491652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:15.230 [2024-11-15 10:39:40.491784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:15.230 [2024-11-15 10:39:40.491825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:16:15.230 [2024-11-15 10:39:40.491828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:15.230 [2024-11-15 10:39:40.548089] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:16:15.230 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:16:15.230 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0
00:16:15.230 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:15.230 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:15.230 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:16:15.230 [2024-11-15 10:39:40.627476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:15.230 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:15.230 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:16:15.230 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable
00:16:15.230 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:16:15.230 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:16:15.230 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:15.230 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:16:15.230 Malloc0
00:16:15.230 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:15.230 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:16:15.230 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:15.230 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:16:15.490 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:15.491 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:16:15.491 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:15.491 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:16:15.491 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:15.491 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:16:15.491 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:15.491 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:16:15.491 [2024-11-15 10:39:40.746005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:16:15.491 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:15.491 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:16:15.491 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:15.491 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:16:15.491 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:15.491 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:16:15.491 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:15.491 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:16:15.491 [
00:16:15.491   {
00:16:15.491     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:16:15.491     "subtype": "Discovery",
00:16:15.491     "listen_addresses": [
00:16:15.491       {
00:16:15.491         "trtype": "TCP",
00:16:15.491         "adrfam": "IPv4",
00:16:15.491         "traddr": "10.0.0.3",
00:16:15.491         "trsvcid": "4420"
00:16:15.491       }
00:16:15.491     ],
00:16:15.491     "allow_any_host": true,
00:16:15.491     "hosts": []
00:16:15.491   },
00:16:15.491   {
00:16:15.491     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:16:15.491     "subtype": "NVMe",
00:16:15.491     "listen_addresses": [
00:16:15.491       {
00:16:15.491         "trtype": "TCP",
00:16:15.491         "adrfam": "IPv4",
00:16:15.491         "traddr": "10.0.0.3",
00:16:15.491         "trsvcid": "4420"
00:16:15.491       }
00:16:15.491     ],
00:16:15.491     "allow_any_host": true,
00:16:15.491     "hosts": [],
00:16:15.491     "serial_number": "SPDK00000000000001",
00:16:15.491     "model_number": "SPDK bdev Controller",
00:16:15.491     "max_namespaces": 32,
00:16:15.491     "min_cntlid": 1,
00:16:15.491     "max_cntlid": 65519,
00:16:15.491     "namespaces": [
00:16:15.491       {
00:16:15.491         "nsid": 1,
00:16:15.491         "bdev_name": "Malloc0",
00:16:15.491         "name": "Malloc0",
00:16:15.491         "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:16:15.491         "eui64": "ABCDEF0123456789",
00:16:15.491         "uuid": "2d1507ea-f353-4fe2-98ab-77104a4119ab"
00:16:15.491       }
00:16:15.491     ]
00:16:15.491   }
00:16:15.491 ]
00:16:15.491 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:15.491 10:39:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:16:15.491 [2024-11-15 10:39:40.798167] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
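
The JSON dump is nvmf_get_subsystems echoing back the target state that the preceding rpc_cmd calls built: one discovery subsystem and one NVM subsystem (cnode1) with the Malloc0 namespace, both listening on 10.0.0.3:4420. Outside the harness the same target could be assembled with the stock rpc.py client directly; a reconstruction of the traced sequence (rpc_cmd is essentially a thin wrapper around this, run from the SPDK repository root):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_get_subsystems

The "Starting SPDK" banner above is not the target: it is spdk_nvme_identify booting its own minimal SPDK environment as an initiator, with the EAL parameters that follow.
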
00:16:15.491 [2024-11-15 10:39:40.798226] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74336 ] 00:16:15.491 [2024-11-15 10:39:40.956936] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:16:15.491 [2024-11-15 10:39:40.957005] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:15.491 [2024-11-15 10:39:40.957013] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:15.491 [2024-11-15 10:39:40.957029] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:15.491 [2024-11-15 10:39:40.957043] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:15.491 [2024-11-15 10:39:40.957368] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:16:15.491 [2024-11-15 10:39:40.957442] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd54750 0 00:16:15.491 [2024-11-15 10:39:40.971535] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:15.491 [2024-11-15 10:39:40.971557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:15.491 [2024-11-15 10:39:40.971564] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:15.491 [2024-11-15 10:39:40.971568] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:15.491 [2024-11-15 10:39:40.971600] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.491 [2024-11-15 10:39:40.971608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.491 [2024-11-15 10:39:40.971612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd54750) 00:16:15.491 [2024-11-15 10:39:40.971628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:15.491 [2024-11-15 10:39:40.971664] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8740, cid 0, qid 0 00:16:15.491 [2024-11-15 10:39:40.979533] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.491 [2024-11-15 10:39:40.979555] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.491 [2024-11-15 10:39:40.979560] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.491 [2024-11-15 10:39:40.979566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8740) on tqpair=0xd54750 00:16:15.491 [2024-11-15 10:39:40.979580] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:15.491 [2024-11-15 10:39:40.979589] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:16:15.491 [2024-11-15 10:39:40.979596] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:16:15.491 [2024-11-15 10:39:40.979613] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.491 [2024-11-15 10:39:40.979619] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
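
From here down to the decoded identify output, the DEBUG records show SPDK's userspace initiator bringing up the discovery controller: a TCP connect, the NVMe/TCP ICReq/ICResp exchange (the pdu type = 1 records), then a Fabrics CONNECT capsule on the admin queue (cid 0). The kernel initiator performs the same wire exchange; a rough nvme-cli equivalent, assuming the tool is installed and the nvme-tcp module is loaded as earlier in this run:

  nvme discover -t tcp -a 10.0.0.3 -s 4420                                # discovery controller, like this pass
  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1   # I/O controller
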
00:16:15.491 [2024-11-15 10:39:40.979623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd54750) 00:16:15.491 [2024-11-15 10:39:40.979632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.491 [2024-11-15 10:39:40.979660] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8740, cid 0, qid 0 00:16:15.491 [2024-11-15 10:39:40.979727] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.491 [2024-11-15 10:39:40.979734] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.491 [2024-11-15 10:39:40.979738] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.491 [2024-11-15 10:39:40.979742] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8740) on tqpair=0xd54750 00:16:15.491 [2024-11-15 10:39:40.979748] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:16:15.491 [2024-11-15 10:39:40.979756] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:16:15.491 [2024-11-15 10:39:40.979765] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.491 [2024-11-15 10:39:40.979770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.491 [2024-11-15 10:39:40.979774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd54750) 00:16:15.491 [2024-11-15 10:39:40.979782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.491 [2024-11-15 10:39:40.979800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8740, cid 0, qid 0 00:16:15.491 [2024-11-15 10:39:40.979850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.491 [2024-11-15 10:39:40.979857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.491 [2024-11-15 10:39:40.979861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.491 [2024-11-15 10:39:40.979865] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8740) on tqpair=0xd54750 00:16:15.491 [2024-11-15 10:39:40.979871] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:16:15.491 [2024-11-15 10:39:40.979880] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:16:15.491 [2024-11-15 10:39:40.979888] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.491 [2024-11-15 10:39:40.979892] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.491 [2024-11-15 10:39:40.979896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd54750) 00:16:15.491 [2024-11-15 10:39:40.979904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.491 [2024-11-15 10:39:40.979922] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8740, cid 0, qid 0 00:16:15.491 [2024-11-15 10:39:40.979969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.491 [2024-11-15 10:39:40.979976] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.491 [2024-11-15 10:39:40.979980] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.491 [2024-11-15 10:39:40.979984] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8740) on tqpair=0xd54750 00:16:15.491 [2024-11-15 10:39:40.979990] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:15.491 [2024-11-15 10:39:40.980000] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.491 [2024-11-15 10:39:40.980005] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.491 [2024-11-15 10:39:40.980009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd54750) 00:16:15.491 [2024-11-15 10:39:40.980017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.491 [2024-11-15 10:39:40.980034] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8740, cid 0, qid 0 00:16:15.491 [2024-11-15 10:39:40.980089] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.491 [2024-11-15 10:39:40.980096] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.491 [2024-11-15 10:39:40.980100] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.491 [2024-11-15 10:39:40.980104] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8740) on tqpair=0xd54750 00:16:15.491 [2024-11-15 10:39:40.980109] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:16:15.492 [2024-11-15 10:39:40.980115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:16:15.492 [2024-11-15 10:39:40.980123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:15.492 [2024-11-15 10:39:40.980234] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:16:15.492 [2024-11-15 10:39:40.980240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:15.492 [2024-11-15 10:39:40.980250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.980255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.980259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd54750) 00:16:15.492 [2024-11-15 10:39:40.980266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.492 [2024-11-15 10:39:40.980286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8740, cid 0, qid 0 00:16:15.492 [2024-11-15 10:39:40.980336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.492 [2024-11-15 10:39:40.980343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.492 [2024-11-15 10:39:40.980347] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
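
The FABRIC PROPERTY GET/SET capsules in this stretch are the fabrics substitute for PCIe register access during controller bring-up: the initiator reads VS and CAP, checks and clears CC.EN, sets CC.EN = 1, then polls until CSTS.RDY = 1, exactly the state transitions named in the nvme_ctrlr.c records. On an already-connected fabrics controller the same properties can be read back with nvme-cli (assumed available; /dev/nvme0 is a placeholder device name), using the standard NVMe register offsets:

  nvme get-property /dev/nvme0 --offset=0x0 --human-readable    # CAP: controller capabilities
  nvme get-property /dev/nvme0 --offset=0x8 --human-readable    # VS:  specification version
  nvme get-property /dev/nvme0 --offset=0x14 --human-readable   # CC:  configuration (EN bit)
  nvme get-property /dev/nvme0 --offset=0x1c --human-readable   # CSTS: status (RDY bit)
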
00:16:15.492 [2024-11-15 10:39:40.980351] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8740) on tqpair=0xd54750 00:16:15.492 [2024-11-15 10:39:40.980357] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:15.492 [2024-11-15 10:39:40.980367] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.980372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.980376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd54750) 00:16:15.492 [2024-11-15 10:39:40.980384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.492 [2024-11-15 10:39:40.980401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8740, cid 0, qid 0 00:16:15.492 [2024-11-15 10:39:40.980447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.492 [2024-11-15 10:39:40.980454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.492 [2024-11-15 10:39:40.980458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.980462] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8740) on tqpair=0xd54750 00:16:15.492 [2024-11-15 10:39:40.980467] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:15.492 [2024-11-15 10:39:40.980473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:16:15.492 [2024-11-15 10:39:40.980481] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:16:15.492 [2024-11-15 10:39:40.980495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:16:15.492 [2024-11-15 10:39:40.980507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.980525] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd54750) 00:16:15.492 [2024-11-15 10:39:40.980535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.492 [2024-11-15 10:39:40.980556] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8740, cid 0, qid 0 00:16:15.492 [2024-11-15 10:39:40.980655] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.492 [2024-11-15 10:39:40.980663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.492 [2024-11-15 10:39:40.980667] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.980671] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd54750): datao=0, datal=4096, cccid=0 00:16:15.492 [2024-11-15 10:39:40.980676] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdb8740) on tqpair(0xd54750): expected_datao=0, payload_size=4096 00:16:15.492 [2024-11-15 10:39:40.980681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
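
With the controller ready, the initiator issues IDENTIFY with CNS 01h (the cdw10:00000001 command above, answered by a 4096-byte C2H data PDU) and, in the records further below, GET LOG PAGE commands whose cdw10 ends in 70h, i.e. the discovery log page (LID 0x70) that is rendered in readable form after the trace. Rough kernel-tool equivalents of those two admin commands (nvme-cli assumed, /dev/nvme0 hypothetical):

  nvme id-ctrl /dev/nvme0                              # IDENTIFY, CNS 0x01
  nvme get-log /dev/nvme0 --log-id=0x70 --log-len=512  # raw discovery log page
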
00:16:15.492 [2024-11-15 10:39:40.980691] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.980695] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.980705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.492 [2024-11-15 10:39:40.980711] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.492 [2024-11-15 10:39:40.980715] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.980719] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8740) on tqpair=0xd54750 00:16:15.492 [2024-11-15 10:39:40.980728] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:16:15.492 [2024-11-15 10:39:40.980734] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:16:15.492 [2024-11-15 10:39:40.980739] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:16:15.492 [2024-11-15 10:39:40.980744] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:16:15.492 [2024-11-15 10:39:40.980749] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:16:15.492 [2024-11-15 10:39:40.980755] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:16:15.492 [2024-11-15 10:39:40.980769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:16:15.492 [2024-11-15 10:39:40.980778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.980783] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.980787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd54750) 00:16:15.492 [2024-11-15 10:39:40.980795] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:15.492 [2024-11-15 10:39:40.980815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8740, cid 0, qid 0 00:16:15.492 [2024-11-15 10:39:40.980869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.492 [2024-11-15 10:39:40.980876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.492 [2024-11-15 10:39:40.980880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.980885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8740) on tqpair=0xd54750 00:16:15.492 [2024-11-15 10:39:40.980893] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.980897] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.980901] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd54750) 00:16:15.492 [2024-11-15 10:39:40.980908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.492 [2024-11-15 10:39:40.980915] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.980919] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.980923] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd54750) 00:16:15.492 [2024-11-15 10:39:40.980929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.492 [2024-11-15 10:39:40.980936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.980940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.980944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd54750) 00:16:15.492 [2024-11-15 10:39:40.980950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.492 [2024-11-15 10:39:40.980956] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.980960] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.980964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd54750) 00:16:15.492 [2024-11-15 10:39:40.980970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.492 [2024-11-15 10:39:40.980976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:15.492 [2024-11-15 10:39:40.980991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:15.492 [2024-11-15 10:39:40.980999] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.981003] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd54750) 00:16:15.492 [2024-11-15 10:39:40.981011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.492 [2024-11-15 10:39:40.981032] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8740, cid 0, qid 0 00:16:15.492 [2024-11-15 10:39:40.981038] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb88c0, cid 1, qid 0 00:16:15.492 [2024-11-15 10:39:40.981043] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8a40, cid 2, qid 0 00:16:15.492 [2024-11-15 10:39:40.981048] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8bc0, cid 3, qid 0 00:16:15.492 [2024-11-15 10:39:40.981053] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8d40, cid 4, qid 0 00:16:15.492 [2024-11-15 10:39:40.981145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.492 [2024-11-15 10:39:40.981152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.492 [2024-11-15 10:39:40.981156] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.981160] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8d40) on tqpair=0xd54750 00:16:15.492 [2024-11-15 10:39:40.981166] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:16:15.492 [2024-11-15 10:39:40.981171] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:16:15.492 [2024-11-15 10:39:40.981184] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.492 [2024-11-15 10:39:40.981189] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd54750) 00:16:15.492 [2024-11-15 10:39:40.981196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.492 [2024-11-15 10:39:40.981214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8d40, cid 4, qid 0 00:16:15.492 [2024-11-15 10:39:40.981274] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.492 [2024-11-15 10:39:40.981280] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.493 [2024-11-15 10:39:40.981284] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.981288] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd54750): datao=0, datal=4096, cccid=4 00:16:15.493 [2024-11-15 10:39:40.981293] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdb8d40) on tqpair(0xd54750): expected_datao=0, payload_size=4096 00:16:15.493 [2024-11-15 10:39:40.981298] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.981306] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.981310] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.981318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.493 [2024-11-15 10:39:40.981325] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.493 [2024-11-15 10:39:40.981328] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.981333] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8d40) on tqpair=0xd54750 00:16:15.493 [2024-11-15 10:39:40.981346] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:16:15.493 [2024-11-15 10:39:40.981376] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.981394] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd54750) 00:16:15.493 [2024-11-15 10:39:40.981404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.493 [2024-11-15 10:39:40.981412] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.981417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.981421] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd54750) 00:16:15.493 [2024-11-15 10:39:40.981427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.493 [2024-11-15 10:39:40.981454] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xdb8d40, cid 4, qid 0 00:16:15.493 [2024-11-15 10:39:40.981462] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8ec0, cid 5, qid 0 00:16:15.493 [2024-11-15 10:39:40.981585] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.493 [2024-11-15 10:39:40.981608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.493 [2024-11-15 10:39:40.981614] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.981619] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd54750): datao=0, datal=1024, cccid=4 00:16:15.493 [2024-11-15 10:39:40.981624] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdb8d40) on tqpair(0xd54750): expected_datao=0, payload_size=1024 00:16:15.493 [2024-11-15 10:39:40.981629] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.981637] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.981641] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.981647] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.493 [2024-11-15 10:39:40.981653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.493 [2024-11-15 10:39:40.981657] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.981661] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8ec0) on tqpair=0xd54750 00:16:15.493 [2024-11-15 10:39:40.981689] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.493 [2024-11-15 10:39:40.981697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.493 [2024-11-15 10:39:40.981701] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.981705] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8d40) on tqpair=0xd54750 00:16:15.493 [2024-11-15 10:39:40.981719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.981723] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd54750) 00:16:15.493 [2024-11-15 10:39:40.981732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.493 [2024-11-15 10:39:40.981758] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8d40, cid 4, qid 0 00:16:15.493 [2024-11-15 10:39:40.981828] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.493 [2024-11-15 10:39:40.981836] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.493 [2024-11-15 10:39:40.981840] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.981843] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd54750): datao=0, datal=3072, cccid=4 00:16:15.493 [2024-11-15 10:39:40.981848] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdb8d40) on tqpair(0xd54750): expected_datao=0, payload_size=3072 00:16:15.493 [2024-11-15 10:39:40.981853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.981860] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.981864] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.981873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.493 [2024-11-15 10:39:40.981879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.493 [2024-11-15 10:39:40.981883] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.981887] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8d40) on tqpair=0xd54750 00:16:15.493 [2024-11-15 10:39:40.981897] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.981902] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd54750) 00:16:15.493 [2024-11-15 10:39:40.981910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.493 [2024-11-15 10:39:40.981934] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8d40, cid 4, qid 0 00:16:15.493 [2024-11-15 10:39:40.981997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.493 [2024-11-15 10:39:40.982004] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.493 [2024-11-15 10:39:40.982008] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.982012] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd54750): datao=0, datal=8, cccid=4 00:16:15.493 [2024-11-15 10:39:40.982016] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdb8d40) on tqpair(0xd54750): expected_datao=0, payload_size=8 00:16:15.493 [2024-11-15 10:39:40.982021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.982028] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.982032] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.982047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.493 [2024-11-15 10:39:40.982054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.493 [2024-11-15 10:39:40.982058] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.493 [2024-11-15 10:39:40.982062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8d40) on tqpair=0xd54750 00:16:15.493 ===================================================== 00:16:15.493 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:15.493 ===================================================== 00:16:15.493 Controller Capabilities/Features 00:16:15.493 ================================ 00:16:15.493 Vendor ID: 0000 00:16:15.493 Subsystem Vendor ID: 0000 00:16:15.493 Serial Number: .................... 00:16:15.493 Model Number: ........................................ 
00:16:15.493 Firmware Version: 25.01 00:16:15.493 Recommended Arb Burst: 0 00:16:15.493 IEEE OUI Identifier: 00 00 00 00:16:15.493 Multi-path I/O 00:16:15.493 May have multiple subsystem ports: No 00:16:15.493 May have multiple controllers: No 00:16:15.493 Associated with SR-IOV VF: No 00:16:15.493 Max Data Transfer Size: 131072 00:16:15.493 Max Number of Namespaces: 0 00:16:15.493 Max Number of I/O Queues: 1024 00:16:15.493 NVMe Specification Version (VS): 1.3 00:16:15.493 NVMe Specification Version (Identify): 1.3 00:16:15.493 Maximum Queue Entries: 128 00:16:15.493 Contiguous Queues Required: Yes 00:16:15.493 Arbitration Mechanisms Supported 00:16:15.493 Weighted Round Robin: Not Supported 00:16:15.493 Vendor Specific: Not Supported 00:16:15.493 Reset Timeout: 15000 ms 00:16:15.493 Doorbell Stride: 4 bytes 00:16:15.493 NVM Subsystem Reset: Not Supported 00:16:15.493 Command Sets Supported 00:16:15.493 NVM Command Set: Supported 00:16:15.493 Boot Partition: Not Supported 00:16:15.493 Memory Page Size Minimum: 4096 bytes 00:16:15.493 Memory Page Size Maximum: 4096 bytes 00:16:15.493 Persistent Memory Region: Not Supported 00:16:15.493 Optional Asynchronous Events Supported 00:16:15.493 Namespace Attribute Notices: Not Supported 00:16:15.493 Firmware Activation Notices: Not Supported 00:16:15.493 ANA Change Notices: Not Supported 00:16:15.493 PLE Aggregate Log Change Notices: Not Supported 00:16:15.493 LBA Status Info Alert Notices: Not Supported 00:16:15.493 EGE Aggregate Log Change Notices: Not Supported 00:16:15.493 Normal NVM Subsystem Shutdown event: Not Supported 00:16:15.493 Zone Descriptor Change Notices: Not Supported 00:16:15.493 Discovery Log Change Notices: Supported 00:16:15.493 Controller Attributes 00:16:15.493 128-bit Host Identifier: Not Supported 00:16:15.493 Non-Operational Permissive Mode: Not Supported 00:16:15.493 NVM Sets: Not Supported 00:16:15.493 Read Recovery Levels: Not Supported 00:16:15.493 Endurance Groups: Not Supported 00:16:15.493 Predictable Latency Mode: Not Supported 00:16:15.493 Traffic Based Keep ALive: Not Supported 00:16:15.493 Namespace Granularity: Not Supported 00:16:15.494 SQ Associations: Not Supported 00:16:15.494 UUID List: Not Supported 00:16:15.494 Multi-Domain Subsystem: Not Supported 00:16:15.494 Fixed Capacity Management: Not Supported 00:16:15.494 Variable Capacity Management: Not Supported 00:16:15.494 Delete Endurance Group: Not Supported 00:16:15.494 Delete NVM Set: Not Supported 00:16:15.494 Extended LBA Formats Supported: Not Supported 00:16:15.494 Flexible Data Placement Supported: Not Supported 00:16:15.494 00:16:15.494 Controller Memory Buffer Support 00:16:15.494 ================================ 00:16:15.494 Supported: No 00:16:15.494 00:16:15.494 Persistent Memory Region Support 00:16:15.494 ================================ 00:16:15.494 Supported: No 00:16:15.494 00:16:15.494 Admin Command Set Attributes 00:16:15.494 ============================ 00:16:15.494 Security Send/Receive: Not Supported 00:16:15.494 Format NVM: Not Supported 00:16:15.494 Firmware Activate/Download: Not Supported 00:16:15.494 Namespace Management: Not Supported 00:16:15.494 Device Self-Test: Not Supported 00:16:15.494 Directives: Not Supported 00:16:15.494 NVMe-MI: Not Supported 00:16:15.494 Virtualization Management: Not Supported 00:16:15.494 Doorbell Buffer Config: Not Supported 00:16:15.494 Get LBA Status Capability: Not Supported 00:16:15.494 Command & Feature Lockdown Capability: Not Supported 00:16:15.494 Abort Command Limit: 1 00:16:15.494 Async 
Event Request Limit: 4 00:16:15.494 Number of Firmware Slots: N/A 00:16:15.494 Firmware Slot 1 Read-Only: N/A 00:16:15.494 Firmware Activation Without Reset: N/A 00:16:15.494 Multiple Update Detection Support: N/A 00:16:15.494 Firmware Update Granularity: No Information Provided 00:16:15.494 Per-Namespace SMART Log: No 00:16:15.494 Asymmetric Namespace Access Log Page: Not Supported 00:16:15.494 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:15.494 Command Effects Log Page: Not Supported 00:16:15.494 Get Log Page Extended Data: Supported 00:16:15.494 Telemetry Log Pages: Not Supported 00:16:15.494 Persistent Event Log Pages: Not Supported 00:16:15.494 Supported Log Pages Log Page: May Support 00:16:15.494 Commands Supported & Effects Log Page: Not Supported 00:16:15.494 Feature Identifiers & Effects Log Page:May Support 00:16:15.494 NVMe-MI Commands & Effects Log Page: May Support 00:16:15.494 Data Area 4 for Telemetry Log: Not Supported 00:16:15.494 Error Log Page Entries Supported: 128 00:16:15.494 Keep Alive: Not Supported 00:16:15.494 00:16:15.494 NVM Command Set Attributes 00:16:15.494 ========================== 00:16:15.494 Submission Queue Entry Size 00:16:15.494 Max: 1 00:16:15.494 Min: 1 00:16:15.494 Completion Queue Entry Size 00:16:15.494 Max: 1 00:16:15.494 Min: 1 00:16:15.494 Number of Namespaces: 0 00:16:15.494 Compare Command: Not Supported 00:16:15.494 Write Uncorrectable Command: Not Supported 00:16:15.494 Dataset Management Command: Not Supported 00:16:15.494 Write Zeroes Command: Not Supported 00:16:15.494 Set Features Save Field: Not Supported 00:16:15.494 Reservations: Not Supported 00:16:15.494 Timestamp: Not Supported 00:16:15.494 Copy: Not Supported 00:16:15.494 Volatile Write Cache: Not Present 00:16:15.494 Atomic Write Unit (Normal): 1 00:16:15.494 Atomic Write Unit (PFail): 1 00:16:15.494 Atomic Compare & Write Unit: 1 00:16:15.494 Fused Compare & Write: Supported 00:16:15.494 Scatter-Gather List 00:16:15.494 SGL Command Set: Supported 00:16:15.494 SGL Keyed: Supported 00:16:15.494 SGL Bit Bucket Descriptor: Not Supported 00:16:15.494 SGL Metadata Pointer: Not Supported 00:16:15.494 Oversized SGL: Not Supported 00:16:15.494 SGL Metadata Address: Not Supported 00:16:15.494 SGL Offset: Supported 00:16:15.494 Transport SGL Data Block: Not Supported 00:16:15.494 Replay Protected Memory Block: Not Supported 00:16:15.494 00:16:15.494 Firmware Slot Information 00:16:15.494 ========================= 00:16:15.494 Active slot: 0 00:16:15.494 00:16:15.494 00:16:15.494 Error Log 00:16:15.494 ========= 00:16:15.494 00:16:15.494 Active Namespaces 00:16:15.494 ================= 00:16:15.494 Discovery Log Page 00:16:15.494 ================== 00:16:15.494 Generation Counter: 2 00:16:15.494 Number of Records: 2 00:16:15.494 Record Format: 0 00:16:15.494 00:16:15.494 Discovery Log Entry 0 00:16:15.494 ---------------------- 00:16:15.494 Transport Type: 3 (TCP) 00:16:15.494 Address Family: 1 (IPv4) 00:16:15.494 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:15.494 Entry Flags: 00:16:15.494 Duplicate Returned Information: 1 00:16:15.494 Explicit Persistent Connection Support for Discovery: 1 00:16:15.494 Transport Requirements: 00:16:15.494 Secure Channel: Not Required 00:16:15.494 Port ID: 0 (0x0000) 00:16:15.494 Controller ID: 65535 (0xffff) 00:16:15.494 Admin Max SQ Size: 128 00:16:15.494 Transport Service Identifier: 4420 00:16:15.494 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:15.494 Transport Address: 10.0.0.3 00:16:15.494 
Discovery Log Entry 1 00:16:15.494 ---------------------- 00:16:15.494 Transport Type: 3 (TCP) 00:16:15.494 Address Family: 1 (IPv4) 00:16:15.494 Subsystem Type: 2 (NVM Subsystem) 00:16:15.494 Entry Flags: 00:16:15.494 Duplicate Returned Information: 0 00:16:15.494 Explicit Persistent Connection Support for Discovery: 0 00:16:15.494 Transport Requirements: 00:16:15.494 Secure Channel: Not Required 00:16:15.494 Port ID: 0 (0x0000) 00:16:15.494 Controller ID: 65535 (0xffff) 00:16:15.494 Admin Max SQ Size: 128 00:16:15.494 Transport Service Identifier: 4420 00:16:15.494 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:16:15.494 Transport Address: 10.0.0.3 [2024-11-15 10:39:40.982189] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:16:15.494 [2024-11-15 10:39:40.982207] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8740) on tqpair=0xd54750 00:16:15.494 [2024-11-15 10:39:40.982214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.494 [2024-11-15 10:39:40.982220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb88c0) on tqpair=0xd54750 00:16:15.494 [2024-11-15 10:39:40.982226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.494 [2024-11-15 10:39:40.982231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8a40) on tqpair=0xd54750 00:16:15.494 [2024-11-15 10:39:40.982236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.494 [2024-11-15 10:39:40.982241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8bc0) on tqpair=0xd54750 00:16:15.494 [2024-11-15 10:39:40.982246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.495 [2024-11-15 10:39:40.982257] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.982261] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.982265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd54750) 00:16:15.495 [2024-11-15 10:39:40.982274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.495 [2024-11-15 10:39:40.982299] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8bc0, cid 3, qid 0 00:16:15.495 [2024-11-15 10:39:40.982355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.495 [2024-11-15 10:39:40.982362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.495 [2024-11-15 10:39:40.982367] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.982371] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8bc0) on tqpair=0xd54750 00:16:15.495 [2024-11-15 10:39:40.982379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.982384] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.982388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd54750) 00:16:15.495 [2024-11-15 10:39:40.982395] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.495 [2024-11-15 10:39:40.982418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8bc0, cid 3, qid 0 00:16:15.495 [2024-11-15 10:39:40.982488] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.495 [2024-11-15 10:39:40.982495] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.495 [2024-11-15 10:39:40.982499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.982503] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8bc0) on tqpair=0xd54750 00:16:15.495 [2024-11-15 10:39:40.982508] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:16:15.495 [2024-11-15 10:39:40.982532] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:16:15.495 [2024-11-15 10:39:40.982544] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.982549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.982553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd54750) 00:16:15.495 [2024-11-15 10:39:40.982561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.495 [2024-11-15 10:39:40.982582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8bc0, cid 3, qid 0 00:16:15.495 [2024-11-15 10:39:40.982630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.495 [2024-11-15 10:39:40.982637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.495 [2024-11-15 10:39:40.982641] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.982645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8bc0) on tqpair=0xd54750 00:16:15.495 [2024-11-15 10:39:40.982657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.982662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.982666] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd54750) 00:16:15.495 [2024-11-15 10:39:40.982673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.495 [2024-11-15 10:39:40.982690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8bc0, cid 3, qid 0 00:16:15.495 [2024-11-15 10:39:40.982738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.495 [2024-11-15 10:39:40.982760] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.495 [2024-11-15 10:39:40.982765] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.982770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8bc0) on tqpair=0xd54750 00:16:15.495 [2024-11-15 10:39:40.982782] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.982787] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.982791] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd54750) 00:16:15.495 [2024-11-15 10:39:40.982798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.495 [2024-11-15 10:39:40.982818] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8bc0, cid 3, qid 0 00:16:15.495 [2024-11-15 10:39:40.982862] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.495 [2024-11-15 10:39:40.982873] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.495 [2024-11-15 10:39:40.982878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.982882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8bc0) on tqpair=0xd54750 00:16:15.495 [2024-11-15 10:39:40.982893] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.982897] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.982901] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd54750) 00:16:15.495 [2024-11-15 10:39:40.982909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.495 [2024-11-15 10:39:40.982927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8bc0, cid 3, qid 0 00:16:15.495 [2024-11-15 10:39:40.982974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.495 [2024-11-15 10:39:40.982981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.495 [2024-11-15 10:39:40.982984] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.982988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8bc0) on tqpair=0xd54750 00:16:15.495 [2024-11-15 10:39:40.982999] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.983004] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.983008] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd54750) 00:16:15.495 [2024-11-15 10:39:40.983016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.495 [2024-11-15 10:39:40.983033] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8bc0, cid 3, qid 0 00:16:15.495 [2024-11-15 10:39:40.983083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.495 [2024-11-15 10:39:40.983093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.495 [2024-11-15 10:39:40.983098] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.983102] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8bc0) on tqpair=0xd54750 00:16:15.495 [2024-11-15 10:39:40.983113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.983117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.983121] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd54750) 00:16:15.495 [2024-11-15 10:39:40.983129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.495 [2024-11-15 10:39:40.983147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8bc0, cid 3, qid 0 00:16:15.495 [2024-11-15 10:39:40.983197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.495 [2024-11-15 10:39:40.983204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.495 [2024-11-15 10:39:40.983207] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.983212] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8bc0) on tqpair=0xd54750 00:16:15.495 [2024-11-15 10:39:40.983222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.983227] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.983231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd54750) 00:16:15.495 [2024-11-15 10:39:40.983238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.495 [2024-11-15 10:39:40.983255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8bc0, cid 3, qid 0 00:16:15.495 [2024-11-15 10:39:40.983303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.495 [2024-11-15 10:39:40.983310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.495 [2024-11-15 10:39:40.983314] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.983318] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8bc0) on tqpair=0xd54750 00:16:15.495 [2024-11-15 10:39:40.983328] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.983333] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.495 [2024-11-15 10:39:40.983337] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd54750) 00:16:15.496 [2024-11-15 10:39:40.983344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.496 [2024-11-15 10:39:40.983361] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8bc0, cid 3, qid 0 00:16:15.496 [2024-11-15 10:39:40.983414] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.496 [2024-11-15 10:39:40.983420] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.496 [2024-11-15 10:39:40.983424] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.496 [2024-11-15 10:39:40.983428] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8bc0) on tqpair=0xd54750 00:16:15.496 [2024-11-15 10:39:40.983438] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.496 [2024-11-15 10:39:40.983443] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.496 [2024-11-15 10:39:40.983447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd54750) 00:16:15.496 [2024-11-15 10:39:40.983454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.496 [2024-11-15 10:39:40.983472] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8bc0, cid 3, qid 0 00:16:15.758 [2024-11-15 10:39:40.987531] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.758 [2024-11-15 10:39:40.987551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.758 [2024-11-15 10:39:40.987557] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.758 [2024-11-15 10:39:40.987562] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8bc0) on tqpair=0xd54750 00:16:15.758 [2024-11-15 10:39:40.987576] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.758 [2024-11-15 10:39:40.987581] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.758 [2024-11-15 10:39:40.987585] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd54750) 00:16:15.758 [2024-11-15 10:39:40.987594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.758 [2024-11-15 10:39:40.987619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb8bc0, cid 3, qid 0 00:16:15.758 [2024-11-15 10:39:40.987671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.758 [2024-11-15 10:39:40.987678] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.758 [2024-11-15 10:39:40.987682] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.758 [2024-11-15 10:39:40.987686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb8bc0) on tqpair=0xd54750 00:16:15.758 [2024-11-15 10:39:40.987695] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:16:15.758 00:16:15.758 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:16:15.758 [2024-11-15 10:39:41.028774] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
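The -r argument passed to spdk_nvme_identify above is a standard SPDK transport ID string (trtype/adrfam/traddr/trsvcid/subnqn). For context, a minimal C program that connects to the same target through SPDK's public host API could look like the sketch below; it is illustrative only (not part of this test run), assumes an SPDK development install, and trims error handling. spdk_nvme_connect() drives the entire admin state machine that the debug records below trace.

/* Illustrative sketch, not part of this test run: connect to the target
 * that spdk_nvme_identify probes above and print a couple of identify
 * fields. Assumes SPDK headers and libraries are installed. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";	/* hypothetical app name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same transport ID string that identify.sh passes via -r */
	if (spdk_nvme_transport_id_parse(&trid,
		"trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
		"subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Performs everything the debug log below traces: TCP connect,
	 * ICReq/ICResp exchange, Fabrics CONNECT, the CC/CSTS enable
	 * handshake, IDENTIFY, AER configuration, keep-alive setup. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("MN: %.40s, max xfer size: %u bytes\n",
	       cdata->mn, spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

	spdk_nvme_detach(ctrlr);	/* shutdown sequence, as logged further below */
	return 0;
}
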
00:16:15.758 [2024-11-15 10:39:41.028835] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74347 ] 00:16:15.758 [2024-11-15 10:39:41.188316] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:16:15.758 [2024-11-15 10:39:41.188385] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:15.758 [2024-11-15 10:39:41.188393] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:15.758 [2024-11-15 10:39:41.188408] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:15.758 [2024-11-15 10:39:41.188420] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:15.758 [2024-11-15 10:39:41.188790] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:16:15.758 [2024-11-15 10:39:41.188864] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1539750 0 00:16:15.758 [2024-11-15 10:39:41.203542] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:15.758 [2024-11-15 10:39:41.203569] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:15.758 [2024-11-15 10:39:41.203576] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:15.758 [2024-11-15 10:39:41.203580] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:15.758 [2024-11-15 10:39:41.203615] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.758 [2024-11-15 10:39:41.203623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.758 [2024-11-15 10:39:41.203628] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1539750) 00:16:15.758 [2024-11-15 10:39:41.203645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:15.758 [2024-11-15 10:39:41.203676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159d740, cid 0, qid 0 00:16:15.758 [2024-11-15 10:39:41.211539] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.758 [2024-11-15 10:39:41.211563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.758 [2024-11-15 10:39:41.211569] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.758 [2024-11-15 10:39:41.211574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159d740) on tqpair=0x1539750 00:16:15.758 [2024-11-15 10:39:41.211587] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:15.758 [2024-11-15 10:39:41.211597] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:16:15.758 [2024-11-15 10:39:41.211604] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:16:15.758 [2024-11-15 10:39:41.211623] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.758 [2024-11-15 10:39:41.211630] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.758 [2024-11-15 10:39:41.211634] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1539750) 00:16:15.758 [2024-11-15 10:39:41.211648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.758 [2024-11-15 10:39:41.211678] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159d740, cid 0, qid 0 00:16:15.758 [2024-11-15 10:39:41.211735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.758 [2024-11-15 10:39:41.211743] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.758 [2024-11-15 10:39:41.211747] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.758 [2024-11-15 10:39:41.211751] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159d740) on tqpair=0x1539750 00:16:15.758 [2024-11-15 10:39:41.211757] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:16:15.758 [2024-11-15 10:39:41.211765] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:16:15.758 [2024-11-15 10:39:41.211773] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.758 [2024-11-15 10:39:41.211778] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.758 [2024-11-15 10:39:41.211782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1539750) 00:16:15.758 [2024-11-15 10:39:41.211790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.758 [2024-11-15 10:39:41.211809] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159d740, cid 0, qid 0 00:16:15.758 [2024-11-15 10:39:41.212184] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.758 [2024-11-15 10:39:41.212200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.758 [2024-11-15 10:39:41.212204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.758 [2024-11-15 10:39:41.212209] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159d740) on tqpair=0x1539750 00:16:15.758 [2024-11-15 10:39:41.212215] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:16:15.758 [2024-11-15 10:39:41.212226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:16:15.758 [2024-11-15 10:39:41.212234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.758 [2024-11-15 10:39:41.212239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.758 [2024-11-15 10:39:41.212243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1539750) 00:16:15.758 [2024-11-15 10:39:41.212251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.758 [2024-11-15 10:39:41.212271] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159d740, cid 0, qid 0 00:16:15.758 [2024-11-15 10:39:41.212328] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.758 [2024-11-15 10:39:41.212335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.758 
[2024-11-15 10:39:41.212339] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.758 [2024-11-15 10:39:41.212343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159d740) on tqpair=0x1539750 00:16:15.758 [2024-11-15 10:39:41.212350] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:15.758 [2024-11-15 10:39:41.212360] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.758 [2024-11-15 10:39:41.212365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.758 [2024-11-15 10:39:41.212369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1539750) 00:16:15.759 [2024-11-15 10:39:41.212377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.759 [2024-11-15 10:39:41.212395] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159d740, cid 0, qid 0 00:16:15.759 [2024-11-15 10:39:41.212711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.759 [2024-11-15 10:39:41.212719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.759 [2024-11-15 10:39:41.212723] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.212727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159d740) on tqpair=0x1539750 00:16:15.759 [2024-11-15 10:39:41.212733] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:16:15.759 [2024-11-15 10:39:41.212739] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:16:15.759 [2024-11-15 10:39:41.212747] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:15.759 [2024-11-15 10:39:41.212860] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:16:15.759 [2024-11-15 10:39:41.212866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:15.759 [2024-11-15 10:39:41.212877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.212882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.212886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1539750) 00:16:15.759 [2024-11-15 10:39:41.212894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.759 [2024-11-15 10:39:41.212916] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159d740, cid 0, qid 0 00:16:15.759 [2024-11-15 10:39:41.213275] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.759 [2024-11-15 10:39:41.213291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.759 [2024-11-15 10:39:41.213296] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.213300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159d740) on tqpair=0x1539750 
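The records above capture the standard NVMe enable handshake as it crosses the fabric: SPDK reads CC and CSTS (the FABRIC PROPERTY GET commands), observes CC.EN = 0 && CSTS.RDY = 0, writes CC.EN = 1 (the FABRIC PROPERTY SET), and then, as the next records show, polls CSTS until RDY = 1. Schematically, using prop_get()/prop_set() as hypothetical stand-ins for the Fabrics Property Get/Set commands (they are not SPDK API), the logic is roughly:

/* Hedged sketch of the enable handshake traced in this log; prop_get()
 * and prop_set() are hypothetical helpers standing in for Fabrics
 * Property Get/Set on the admin queue. */
#include <stdint.h>

#define NVME_REG_CC   0x14	/* Controller Configuration */
#define NVME_REG_CSTS 0x1c	/* Controller Status */

uint32_t prop_get(uint32_t offset);		/* hypothetical helper */
void prop_set(uint32_t offset, uint32_t val);	/* hypothetical helper */

static void enable_controller(void)
{
	uint32_t cc = prop_get(NVME_REG_CC);

	/* "CC.EN = 0 && CSTS.RDY = 0": controller is disabled, enable it */
	if ((cc & 1) == 0 && (prop_get(NVME_REG_CSTS) & 1) == 0) {
		prop_set(NVME_REG_CC, cc | 1);	/* Setting CC.EN = 1 */
	}

	/* "setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)" */
	while ((prop_get(NVME_REG_CSTS) & 1) == 0) {
		/* poll; the real code gives up after the 15 s timeout */
	}
}
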
00:16:15.759 [2024-11-15 10:39:41.213306] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:15.759 [2024-11-15 10:39:41.213318] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.213323] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.213327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1539750) 00:16:15.759 [2024-11-15 10:39:41.213334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.759 [2024-11-15 10:39:41.213354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159d740, cid 0, qid 0 00:16:15.759 [2024-11-15 10:39:41.213405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.759 [2024-11-15 10:39:41.213412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.759 [2024-11-15 10:39:41.213416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.213420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159d740) on tqpair=0x1539750 00:16:15.759 [2024-11-15 10:39:41.213425] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:15.759 [2024-11-15 10:39:41.213430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:16:15.759 [2024-11-15 10:39:41.213439] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:16:15.759 [2024-11-15 10:39:41.213456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:16:15.759 [2024-11-15 10:39:41.213467] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.213473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1539750) 00:16:15.759 [2024-11-15 10:39:41.213481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.759 [2024-11-15 10:39:41.213499] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159d740, cid 0, qid 0 00:16:15.759 [2024-11-15 10:39:41.214123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.759 [2024-11-15 10:39:41.214139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.759 [2024-11-15 10:39:41.214145] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.214149] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1539750): datao=0, datal=4096, cccid=0 00:16:15.759 [2024-11-15 10:39:41.214154] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x159d740) on tqpair(0x1539750): expected_datao=0, payload_size=4096 00:16:15.759 [2024-11-15 10:39:41.214160] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.214170] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.214175] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.214185] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.759 [2024-11-15 10:39:41.214191] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.759 [2024-11-15 10:39:41.214195] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.214199] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159d740) on tqpair=0x1539750 00:16:15.759 [2024-11-15 10:39:41.214209] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:16:15.759 [2024-11-15 10:39:41.214215] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:16:15.759 [2024-11-15 10:39:41.214220] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:16:15.759 [2024-11-15 10:39:41.214225] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:16:15.759 [2024-11-15 10:39:41.214230] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:16:15.759 [2024-11-15 10:39:41.214236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:16:15.759 [2024-11-15 10:39:41.214250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:16:15.759 [2024-11-15 10:39:41.214259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.214264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.214268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1539750) 00:16:15.759 [2024-11-15 10:39:41.214276] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:15.759 [2024-11-15 10:39:41.214300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159d740, cid 0, qid 0 00:16:15.759 [2024-11-15 10:39:41.214719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.759 [2024-11-15 10:39:41.214736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.759 [2024-11-15 10:39:41.214741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.214745] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159d740) on tqpair=0x1539750 00:16:15.759 [2024-11-15 10:39:41.214754] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.214758] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.214762] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1539750) 00:16:15.759 [2024-11-15 10:39:41.214770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.759 [2024-11-15 10:39:41.214778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.214782] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.759 [2024-11-15 
10:39:41.214786] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1539750) 00:16:15.759 [2024-11-15 10:39:41.214792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.759 [2024-11-15 10:39:41.214799] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.214803] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.214807] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1539750) 00:16:15.759 [2024-11-15 10:39:41.214813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.759 [2024-11-15 10:39:41.214820] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.214824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.214828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1539750) 00:16:15.759 [2024-11-15 10:39:41.214834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.759 [2024-11-15 10:39:41.214839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:15.759 [2024-11-15 10:39:41.214854] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:15.759 [2024-11-15 10:39:41.214863] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.214867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1539750) 00:16:15.759 [2024-11-15 10:39:41.214874] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.759 [2024-11-15 10:39:41.214898] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159d740, cid 0, qid 0 00:16:15.759 [2024-11-15 10:39:41.214905] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159d8c0, cid 1, qid 0 00:16:15.759 [2024-11-15 10:39:41.214910] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159da40, cid 2, qid 0 00:16:15.759 [2024-11-15 10:39:41.214915] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159dbc0, cid 3, qid 0 00:16:15.759 [2024-11-15 10:39:41.214920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159dd40, cid 4, qid 0 00:16:15.759 [2024-11-15 10:39:41.215373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.759 [2024-11-15 10:39:41.215387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.759 [2024-11-15 10:39:41.215392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.759 [2024-11-15 10:39:41.215396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159dd40) on tqpair=0x1539750 00:16:15.759 [2024-11-15 10:39:41.215402] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:16:15.759 [2024-11-15 10:39:41.215408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:15.760 [2024-11-15 10:39:41.215418] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:16:15.760 [2024-11-15 10:39:41.215429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:15.760 [2024-11-15 10:39:41.215437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.215442] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.215446] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1539750) 00:16:15.760 [2024-11-15 10:39:41.215453] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:15.760 [2024-11-15 10:39:41.215474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159dd40, cid 4, qid 0 00:16:15.760 [2024-11-15 10:39:41.219530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.760 [2024-11-15 10:39:41.219550] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.760 [2024-11-15 10:39:41.219555] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.219560] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159dd40) on tqpair=0x1539750 00:16:15.760 [2024-11-15 10:39:41.219635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:16:15.760 [2024-11-15 10:39:41.219649] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:15.760 [2024-11-15 10:39:41.219658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.219663] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1539750) 00:16:15.760 [2024-11-15 10:39:41.219672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.760 [2024-11-15 10:39:41.219698] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159dd40, cid 4, qid 0 00:16:15.760 [2024-11-15 10:39:41.219826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.760 [2024-11-15 10:39:41.219833] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.760 [2024-11-15 10:39:41.219837] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.219841] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1539750): datao=0, datal=4096, cccid=4 00:16:15.760 [2024-11-15 10:39:41.219846] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x159dd40) on tqpair(0x1539750): expected_datao=0, payload_size=4096 00:16:15.760 [2024-11-15 10:39:41.219851] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.219860] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.219864] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.760 [2024-11-15 
10:39:41.219893] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.760 [2024-11-15 10:39:41.219900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.760 [2024-11-15 10:39:41.219904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.219908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159dd40) on tqpair=0x1539750 00:16:15.760 [2024-11-15 10:39:41.219926] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:16:15.760 [2024-11-15 10:39:41.219937] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:16:15.760 [2024-11-15 10:39:41.219949] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:16:15.760 [2024-11-15 10:39:41.219957] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.219961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1539750) 00:16:15.760 [2024-11-15 10:39:41.219969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.760 [2024-11-15 10:39:41.219990] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159dd40, cid 4, qid 0 00:16:15.760 [2024-11-15 10:39:41.220401] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.760 [2024-11-15 10:39:41.220417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.760 [2024-11-15 10:39:41.220422] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.220426] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1539750): datao=0, datal=4096, cccid=4 00:16:15.760 [2024-11-15 10:39:41.220431] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x159dd40) on tqpair(0x1539750): expected_datao=0, payload_size=4096 00:16:15.760 [2024-11-15 10:39:41.220436] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.220443] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.220447] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.220457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.760 [2024-11-15 10:39:41.220463] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.760 [2024-11-15 10:39:41.220467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.220471] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159dd40) on tqpair=0x1539750 00:16:15.760 [2024-11-15 10:39:41.220492] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:15.760 [2024-11-15 10:39:41.220504] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:15.760 [2024-11-15 10:39:41.220527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.220533] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1539750) 00:16:15.760 [2024-11-15 10:39:41.220541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.760 [2024-11-15 10:39:41.220564] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159dd40, cid 4, qid 0 00:16:15.760 [2024-11-15 10:39:41.220777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.760 [2024-11-15 10:39:41.220789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.760 [2024-11-15 10:39:41.220793] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.220797] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1539750): datao=0, datal=4096, cccid=4 00:16:15.760 [2024-11-15 10:39:41.220802] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x159dd40) on tqpair(0x1539750): expected_datao=0, payload_size=4096 00:16:15.760 [2024-11-15 10:39:41.220807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.220814] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.220819] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.220936] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.760 [2024-11-15 10:39:41.220944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.760 [2024-11-15 10:39:41.220947] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.220951] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159dd40) on tqpair=0x1539750 00:16:15.760 [2024-11-15 10:39:41.220961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:15.760 [2024-11-15 10:39:41.220970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:16:15.760 [2024-11-15 10:39:41.220982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:16:15.760 [2024-11-15 10:39:41.220988] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:15.760 [2024-11-15 10:39:41.220995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:15.760 [2024-11-15 10:39:41.221000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:16:15.760 [2024-11-15 10:39:41.221006] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:16:15.760 [2024-11-15 10:39:41.221011] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:16:15.760 [2024-11-15 10:39:41.221017] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:16:15.760 [2024-11-15 10:39:41.221038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.760 
[2024-11-15 10:39:41.221043] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1539750) 00:16:15.760 [2024-11-15 10:39:41.221051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.760 [2024-11-15 10:39:41.221060] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.221064] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.221068] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1539750) 00:16:15.760 [2024-11-15 10:39:41.221074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.760 [2024-11-15 10:39:41.221102] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159dd40, cid 4, qid 0 00:16:15.760 [2024-11-15 10:39:41.221110] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159dec0, cid 5, qid 0 00:16:15.760 [2024-11-15 10:39:41.221435] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.760 [2024-11-15 10:39:41.221451] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.760 [2024-11-15 10:39:41.221456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.221461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159dd40) on tqpair=0x1539750 00:16:15.760 [2024-11-15 10:39:41.221468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.760 [2024-11-15 10:39:41.221474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.760 [2024-11-15 10:39:41.221478] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.221482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159dec0) on tqpair=0x1539750 00:16:15.760 [2024-11-15 10:39:41.221494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.221499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1539750) 00:16:15.760 [2024-11-15 10:39:41.221507] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.760 [2024-11-15 10:39:41.221539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159dec0, cid 5, qid 0 00:16:15.760 [2024-11-15 10:39:41.221595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.760 [2024-11-15 10:39:41.221612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.760 [2024-11-15 10:39:41.221616] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.760 [2024-11-15 10:39:41.221621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159dec0) on tqpair=0x1539750 00:16:15.761 [2024-11-15 10:39:41.221633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.221637] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1539750) 00:16:15.761 [2024-11-15 10:39:41.221645] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.761 [2024-11-15 10:39:41.221664] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159dec0, cid 5, qid 0 00:16:15.761 [2024-11-15 10:39:41.221871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.761 [2024-11-15 10:39:41.221880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.761 [2024-11-15 10:39:41.221884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.221888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159dec0) on tqpair=0x1539750 00:16:15.761 [2024-11-15 10:39:41.221899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.221904] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1539750) 00:16:15.761 [2024-11-15 10:39:41.221911] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.761 [2024-11-15 10:39:41.221929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159dec0, cid 5, qid 0 00:16:15.761 [2024-11-15 10:39:41.222272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.761 [2024-11-15 10:39:41.222287] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.761 [2024-11-15 10:39:41.222292] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.222296] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159dec0) on tqpair=0x1539750 00:16:15.761 [2024-11-15 10:39:41.222318] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.222324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1539750) 00:16:15.761 [2024-11-15 10:39:41.222332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.761 [2024-11-15 10:39:41.222340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.222344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1539750) 00:16:15.761 [2024-11-15 10:39:41.222351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.761 [2024-11-15 10:39:41.222360] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.222364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1539750) 00:16:15.761 [2024-11-15 10:39:41.222371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.761 [2024-11-15 10:39:41.222380] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.222384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1539750) 00:16:15.761 [2024-11-15 10:39:41.222390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.761 [2024-11-15 10:39:41.222411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159dec0, cid 5, qid 0 00:16:15.761 
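The four GET LOG PAGE commands just issued (the "set supported log pages" step above) pack the log identifier into CDW10 bits 7:0 and the 0-based dword count (NUMDL) into bits 31:16, so their transfer sizes can be checked against the payload_size values in the c2h_data records that follow. A small stand-alone decoder (illustrative only, not SPDK code):

#include <stdio.h>

int main(void)
{
	/* cdw10 values copied from the four GET LOG PAGE records above:
	 * LID 0x01 Error Information, 0x02 SMART / Health Information,
	 * 0x03 Firmware Slot Information, 0x05 Commands Supported and
	 * Effects. */
	const unsigned cdw10[] = { 0x07ff0001, 0x007f0002, 0x007f0003, 0x03ff0005 };

	for (int i = 0; i < 4; i++) {
		unsigned lid   = cdw10[i] & 0xff;
		unsigned bytes = (((cdw10[i] >> 16) & 0xffff) + 1) * 4;
		printf("LID 0x%02x -> %u-byte transfer\n", lid, bytes);
	}
	/* Prints 8192, 512, 512 and 4096 bytes, matching the payload_size
	 * fields of the c2h_data records for cccid 5, 4, 6 and 7 below. */
	return 0;
}
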
[2024-11-15 10:39:41.222419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159dd40, cid 4, qid 0 00:16:15.761 [2024-11-15 10:39:41.222424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159e040, cid 6, qid 0 00:16:15.761 [2024-11-15 10:39:41.222429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159e1c0, cid 7, qid 0 00:16:15.761 [2024-11-15 10:39:41.222876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.761 [2024-11-15 10:39:41.222892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.761 [2024-11-15 10:39:41.222897] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.222901] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1539750): datao=0, datal=8192, cccid=5 00:16:15.761 [2024-11-15 10:39:41.222906] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x159dec0) on tqpair(0x1539750): expected_datao=0, payload_size=8192 00:16:15.761 [2024-11-15 10:39:41.222911] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.222930] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.222936] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.222942] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.761 [2024-11-15 10:39:41.222948] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.761 [2024-11-15 10:39:41.222952] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.222956] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1539750): datao=0, datal=512, cccid=4 00:16:15.761 [2024-11-15 10:39:41.222961] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x159dd40) on tqpair(0x1539750): expected_datao=0, payload_size=512 00:16:15.761 [2024-11-15 10:39:41.222965] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.222972] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.222976] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.222981] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.761 [2024-11-15 10:39:41.222987] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.761 [2024-11-15 10:39:41.222991] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.222995] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1539750): datao=0, datal=512, cccid=6 00:16:15.761 [2024-11-15 10:39:41.222999] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x159e040) on tqpair(0x1539750): expected_datao=0, payload_size=512 00:16:15.761 [2024-11-15 10:39:41.223004] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.223010] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.223014] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.223020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.761 [2024-11-15 10:39:41.223026] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.761 [2024-11-15 10:39:41.223029] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.223033] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1539750): datao=0, datal=4096, cccid=7 00:16:15.761 [2024-11-15 10:39:41.223038] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x159e1c0) on tqpair(0x1539750): expected_datao=0, payload_size=4096 00:16:15.761 [2024-11-15 10:39:41.223042] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.223049] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.223053] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.223059] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.761 [2024-11-15 10:39:41.223065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.761 [2024-11-15 10:39:41.223069] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.761 [2024-11-15 10:39:41.223074] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159dec0) on tqpair=0x1539750 00:16:15.761 [2024-11-15 10:39:41.223090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.761 [2024-11-15 10:39:41.223098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.761 ===================================================== 00:16:15.761 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:15.761 ===================================================== 00:16:15.761 Controller Capabilities/Features 00:16:15.761 ================================ 00:16:15.761 Vendor ID: 8086 00:16:15.761 Subsystem Vendor ID: 8086 00:16:15.761 Serial Number: SPDK00000000000001 00:16:15.761 Model Number: SPDK bdev Controller 00:16:15.761 Firmware Version: 25.01 00:16:15.761 Recommended Arb Burst: 6 00:16:15.761 IEEE OUI Identifier: e4 d2 5c 00:16:15.761 Multi-path I/O 00:16:15.761 May have multiple subsystem ports: Yes 00:16:15.761 May have multiple controllers: Yes 00:16:15.761 Associated with SR-IOV VF: No 00:16:15.761 Max Data Transfer Size: 131072 00:16:15.761 Max Number of Namespaces: 32 00:16:15.761 Max Number of I/O Queues: 127 00:16:15.761 NVMe Specification Version (VS): 1.3 00:16:15.761 NVMe Specification Version (Identify): 1.3 00:16:15.761 Maximum Queue Entries: 128 00:16:15.761 Contiguous Queues Required: Yes 00:16:15.761 Arbitration Mechanisms Supported 00:16:15.761 Weighted Round Robin: Not Supported 00:16:15.761 Vendor Specific: Not Supported 00:16:15.761 Reset Timeout: 15000 ms 00:16:15.761 Doorbell Stride: 4 bytes 00:16:15.761 NVM Subsystem Reset: Not Supported 00:16:15.761 Command Sets Supported 00:16:15.761 NVM Command Set: Supported 00:16:15.761 Boot Partition: Not Supported 00:16:15.761 Memory Page Size Minimum: 4096 bytes 00:16:15.761 Memory Page Size Maximum: 4096 bytes 00:16:15.761 Persistent Memory Region: Not Supported 00:16:15.761 Optional Asynchronous Events Supported 00:16:15.761 Namespace Attribute Notices: Supported 00:16:15.761 Firmware Activation Notices: Not Supported 00:16:15.761 ANA Change Notices: Not Supported 00:16:15.761 PLE Aggregate Log Change Notices: Not Supported 00:16:15.761 LBA Status Info Alert Notices: Not Supported 00:16:15.761 EGE Aggregate Log Change Notices: Not Supported 00:16:15.761 Normal NVM Subsystem Shutdown event: Not Supported 00:16:15.761 Zone Descriptor Change Notices: Not Supported 00:16:15.761 Discovery Log Change 
Notices: Not Supported 00:16:15.761 Controller Attributes 00:16:15.761 128-bit Host Identifier: Supported 00:16:15.761 Non-Operational Permissive Mode: Not Supported 00:16:15.761 NVM Sets: Not Supported 00:16:15.761 Read Recovery Levels: Not Supported 00:16:15.761 Endurance Groups: Not Supported 00:16:15.761 Predictable Latency Mode: Not Supported 00:16:15.761 Traffic Based Keep ALive: Not Supported 00:16:15.761 Namespace Granularity: Not Supported 00:16:15.761 SQ Associations: Not Supported 00:16:15.761 UUID List: Not Supported 00:16:15.761 Multi-Domain Subsystem: Not Supported 00:16:15.761 Fixed Capacity Management: Not Supported 00:16:15.761 Variable Capacity Management: Not Supported 00:16:15.761 Delete Endurance Group: Not Supported 00:16:15.761 Delete NVM Set: Not Supported 00:16:15.762 Extended LBA Formats Supported: Not Supported 00:16:15.762 Flexible Data Placement Supported: Not Supported 00:16:15.762 00:16:15.762 Controller Memory Buffer Support 00:16:15.762 ================================ 00:16:15.762 Supported: No 00:16:15.762 00:16:15.762 Persistent Memory Region Support 00:16:15.762 ================================ 00:16:15.762 Supported: No 00:16:15.762 00:16:15.762 Admin Command Set Attributes 00:16:15.762 ============================ 00:16:15.762 Security Send/Receive: Not Supported 00:16:15.762 Format NVM: Not Supported 00:16:15.762 Firmware Activate/Download: Not Supported 00:16:15.762 Namespace Management: Not Supported 00:16:15.762 Device Self-Test: Not Supported 00:16:15.762 Directives: Not Supported 00:16:15.762 NVMe-MI: Not Supported 00:16:15.762 Virtualization Management: Not Supported 00:16:15.762 Doorbell Buffer Config: Not Supported 00:16:15.762 Get LBA Status Capability: Not Supported 00:16:15.762 Command & Feature Lockdown Capability: Not Supported 00:16:15.762 Abort Command Limit: 4 00:16:15.762 Async Event Request Limit: 4 00:16:15.762 Number of Firmware Slots: N/A 00:16:15.762 Firmware Slot 1 Read-Only: N/A 00:16:15.762 Firmware Activation Without Reset: N/A 00:16:15.762 Multiple Update Detection Support: N/A 00:16:15.762 Firmware Update Granularity: No Information Provided 00:16:15.762 Per-Namespace SMART Log: No 00:16:15.762 Asymmetric Namespace Access Log Page: Not Supported 00:16:15.762 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:16:15.762 Command Effects Log Page: Supported 00:16:15.762 Get Log Page Extended Data: Supported 00:16:15.762 Telemetry Log Pages: Not Supported 00:16:15.762 Persistent Event Log Pages: Not Supported 00:16:15.762 Supported Log Pages Log Page: May Support 00:16:15.762 Commands Supported & Effects Log Page: Not Supported 00:16:15.762 Feature Identifiers & Effects Log Page:May Support 00:16:15.762 NVMe-MI Commands & Effects Log Page: May Support 00:16:15.762 Data Area 4 for Telemetry Log: Not Supported 00:16:15.762 Error Log Page Entries Supported: 128 00:16:15.762 Keep Alive: Supported 00:16:15.762 Keep Alive Granularity: 10000 ms 00:16:15.762 00:16:15.762 NVM Command Set Attributes 00:16:15.762 ========================== 00:16:15.762 Submission Queue Entry Size 00:16:15.762 Max: 64 00:16:15.762 Min: 64 00:16:15.762 Completion Queue Entry Size 00:16:15.762 Max: 16 00:16:15.762 Min: 16 00:16:15.762 Number of Namespaces: 32 00:16:15.762 Compare Command: Supported 00:16:15.762 Write Uncorrectable Command: Not Supported 00:16:15.762 Dataset Management Command: Supported 00:16:15.762 Write Zeroes Command: Supported 00:16:15.762 Set Features Save Field: Not Supported 00:16:15.762 Reservations: Supported 00:16:15.762 Timestamp: Not 
Supported 00:16:15.762 Copy: Supported 00:16:15.762 Volatile Write Cache: Present 00:16:15.762 Atomic Write Unit (Normal): 1 00:16:15.762 Atomic Write Unit (PFail): 1 00:16:15.762 Atomic Compare & Write Unit: 1 00:16:15.762 Fused Compare & Write: Supported 00:16:15.762 Scatter-Gather List 00:16:15.762 SGL Command Set: Supported 00:16:15.762 SGL Keyed: Supported 00:16:15.762 SGL Bit Bucket Descriptor: Not Supported 00:16:15.762 SGL Metadata Pointer: Not Supported 00:16:15.762 Oversized SGL: Not Supported 00:16:15.762 SGL Metadata Address: Not Supported 00:16:15.762 SGL Offset: Supported 00:16:15.762 Transport SGL Data Block: Not Supported 00:16:15.762 Replay Protected Memory Block: Not Supported 00:16:15.762 00:16:15.762 Firmware Slot Information 00:16:15.762 ========================= 00:16:15.762 Active slot: 1 00:16:15.762 Slot 1 Firmware Revision: 25.01 00:16:15.762 00:16:15.762 00:16:15.762 Commands Supported and Effects 00:16:15.762 ============================== 00:16:15.762 Admin Commands 00:16:15.762 -------------- 00:16:15.762 Get Log Page (02h): Supported 00:16:15.762 Identify (06h): Supported 00:16:15.762 Abort (08h): Supported 00:16:15.762 Set Features (09h): Supported 00:16:15.762 Get Features (0Ah): Supported 00:16:15.762 Asynchronous Event Request (0Ch): Supported 00:16:15.762 Keep Alive (18h): Supported 00:16:15.762 I/O Commands 00:16:15.762 ------------ 00:16:15.762 Flush (00h): Supported LBA-Change 00:16:15.762 Write (01h): Supported LBA-Change 00:16:15.762 Read (02h): Supported 00:16:15.762 Compare (05h): Supported 00:16:15.762 Write Zeroes (08h): Supported LBA-Change 00:16:15.762 Dataset Management (09h): Supported LBA-Change 00:16:15.762 Copy (19h): Supported LBA-Change 00:16:15.762 00:16:15.762 Error Log 00:16:15.762 ========= 00:16:15.762 00:16:15.762 Arbitration 00:16:15.762 =========== 00:16:15.762 Arbitration Burst: 1 00:16:15.762 00:16:15.762 Power Management 00:16:15.762 ================ 00:16:15.762 Number of Power States: 1 00:16:15.762 Current Power State: Power State #0 00:16:15.762 Power State #0: 00:16:15.762 Max Power: 0.00 W 00:16:15.762 Non-Operational State: Operational 00:16:15.762 Entry Latency: Not Reported 00:16:15.762 Exit Latency: Not Reported 00:16:15.762 Relative Read Throughput: 0 00:16:15.762 Relative Read Latency: 0 00:16:15.762 Relative Write Throughput: 0 00:16:15.762 Relative Write Latency: 0 00:16:15.762 Idle Power: Not Reported 00:16:15.762 Active Power: Not Reported 00:16:15.762 Non-Operational Permissive Mode: Not Supported 00:16:15.762 00:16:15.762 Health Information 00:16:15.762 ================== 00:16:15.762 Critical Warnings: 00:16:15.762 Available Spare Space: OK 00:16:15.762 Temperature: OK 00:16:15.762 Device Reliability: OK 00:16:15.762 Read Only: No 00:16:15.762 Volatile Memory Backup: OK 00:16:15.762 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:15.762 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:15.762 Available Spare: 0% 00:16:15.762 Available Spare Threshold: 0% 00:16:15.762 Life Percentage Used:[2024-11-15 10:39:41.223101] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.762 [2024-11-15 10:39:41.223106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159dd40) on tqpair=0x1539750 00:16:15.762 [2024-11-15 10:39:41.223119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.762 [2024-11-15 10:39:41.223125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.762 [2024-11-15 10:39:41.223129] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.762 [2024-11-15 10:39:41.223133] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159e040) on tqpair=0x1539750 00:16:15.762 [2024-11-15 10:39:41.223140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.762 [2024-11-15 10:39:41.223147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.762 [2024-11-15 10:39:41.223150] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.762 [2024-11-15 10:39:41.223154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159e1c0) on tqpair=0x1539750 00:16:15.762 [2024-11-15 10:39:41.223263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.762 [2024-11-15 10:39:41.223270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1539750) 00:16:15.762 [2024-11-15 10:39:41.223286] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.762 [2024-11-15 10:39:41.223310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159e1c0, cid 7, qid 0 00:16:15.762 [2024-11-15 10:39:41.227528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.762 [2024-11-15 10:39:41.227549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.762 [2024-11-15 10:39:41.227554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.762 [2024-11-15 10:39:41.227559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159e1c0) on tqpair=0x1539750 00:16:15.763 [2024-11-15 10:39:41.227606] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:16:15.763 [2024-11-15 10:39:41.227621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159d740) on tqpair=0x1539750 00:16:15.763 [2024-11-15 10:39:41.227629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.763 [2024-11-15 10:39:41.227635] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159d8c0) on tqpair=0x1539750 00:16:15.763 [2024-11-15 10:39:41.227640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.763 [2024-11-15 10:39:41.227646] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159da40) on tqpair=0x1539750 00:16:15.763 [2024-11-15 10:39:41.227651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.763 [2024-11-15 10:39:41.227656] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159dbc0) on tqpair=0x1539750 00:16:15.763 [2024-11-15 10:39:41.227661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.763 [2024-11-15 10:39:41.227672] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.763 [2024-11-15 10:39:41.227676] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.763 [2024-11-15 10:39:41.227680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1539750) 00:16:15.763 [2024-11-15 10:39:41.227689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.763 [2024-11-15 10:39:41.227716] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159dbc0, cid 3, qid 0 00:16:15.763 [2024-11-15 10:39:41.227912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.763 [2024-11-15 10:39:41.227928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.763 [2024-11-15 10:39:41.227933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.763 [2024-11-15 10:39:41.227937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159dbc0) on tqpair=0x1539750 00:16:15.763 [2024-11-15 10:39:41.227946] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.763 [2024-11-15 10:39:41.227950] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.763 [2024-11-15 10:39:41.227954] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1539750) 00:16:15.763 [2024-11-15 10:39:41.227962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.763 [2024-11-15 10:39:41.227986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159dbc0, cid 3, qid 0 00:16:15.763 [2024-11-15 10:39:41.228147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.763 [2024-11-15 10:39:41.228161] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.763 [2024-11-15 10:39:41.228166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.763 [2024-11-15 10:39:41.228171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159dbc0) on tqpair=0x1539750 00:16:15.763 [2024-11-15 10:39:41.228177] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:16:15.763 [2024-11-15 10:39:41.228182] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:16:15.763 [2024-11-15 10:39:41.228193] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.763 [2024-11-15 10:39:41.228198] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.763 [2024-11-15 10:39:41.228202] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1539750) 00:16:15.763 [2024-11-15 10:39:41.228210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.763 [2024-11-15 10:39:41.228229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159dbc0, cid 3, qid 0 00:16:15.763 [2024-11-15 10:39:41.228414] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.763 [2024-11-15 10:39:41.228428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.763 [2024-11-15 10:39:41.228433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.763 [2024-11-15 10:39:41.228437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159dbc0) on tqpair=0x1539750 00:16:15.763 [2024-11-15 10:39:41.228449] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.763 [2024-11-15 10:39:41.228455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.763 [2024-11-15 10:39:41.228459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1539750) 00:16:15.763 
[2024-11-15 10:39:41.228466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.763 [2024-11-15 10:39:41.228485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159dbc0, cid 3, qid 0 00:16:15.763 [... eleven further near-identical shutdown-status poll cycles (nvme_tcp_pdu_ch_handle pdu type = 5 -> nvme_tcp_pdu_psh_handle -> nvme_tcp_capsule_resp_hdr_handle -> nvme_tcp_req_complete tcp_req(0x159dbc0) -> nvme_tcp_build_contig_request -> nvme_tcp_qpair_capsule_cmd_send capsule_cmd cid=3 -> FABRIC PROPERTY GET qid:0 cid:3), timestamps 10:39:41.228704 through 10:39:41.231493, omitted here as duplicates ...] 00:16:15.764 [2024-11-15 10:39:41.235522] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159dbc0, cid 3, qid 0 00:16:15.764 [2024-11-15 10:39:41.235549] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.764 [2024-11-15 10:39:41.235557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.764 [2024-11-15 10:39:41.235562] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.764 [2024-11-15 10:39:41.235566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159dbc0) on tqpair=0x1539750 00:16:15.764 [2024-11-15 10:39:41.235580] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.764 [2024-11-15 10:39:41.235586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.764 [2024-11-15 10:39:41.235590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1539750) 00:16:15.764 [2024-11-15 10:39:41.235598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.764 [2024-11-15 10:39:41.235623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159dbc0, cid 3, qid 0 00:16:15.764 [2024-11-15 10:39:41.235676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.764 [2024-11-15 10:39:41.235683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.764 [2024-11-15 10:39:41.235687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.764 [2024-11-15 10:39:41.235691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159dbc0) on tqpair=0x1539750 00:16:15.764 [2024-11-15 10:39:41.235700]
nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:16:16.022 0% 00:16:16.022 Data Units Read: 0 00:16:16.022 Data Units Written: 0 00:16:16.022 Host Read Commands: 0 00:16:16.022 Host Write Commands: 0 00:16:16.022 Controller Busy Time: 0 minutes 00:16:16.022 Power Cycles: 0 00:16:16.022 Power On Hours: 0 hours 00:16:16.022 Unsafe Shutdowns: 0 00:16:16.022 Unrecoverable Media Errors: 0 00:16:16.022 Lifetime Error Log Entries: 0 00:16:16.022 Warning Temperature Time: 0 minutes 00:16:16.022 Critical Temperature Time: 0 minutes 00:16:16.022 00:16:16.022 Number of Queues 00:16:16.022 ================ 00:16:16.022 Number of I/O Submission Queues: 127 00:16:16.022 Number of I/O Completion Queues: 127 00:16:16.022 00:16:16.022 Active Namespaces 00:16:16.022 ================= 00:16:16.022 Namespace ID:1 00:16:16.022 Error Recovery Timeout: Unlimited 00:16:16.022 Command Set Identifier: NVM (00h) 00:16:16.022 Deallocate: Supported 00:16:16.022 Deallocated/Unwritten Error: Not Supported 00:16:16.022 Deallocated Read Value: Unknown 00:16:16.022 Deallocate in Write Zeroes: Not Supported 00:16:16.022 Deallocated Guard Field: 0xFFFF 00:16:16.022 Flush: Supported 00:16:16.022 Reservation: Supported 00:16:16.022 Namespace Sharing Capabilities: Multiple Controllers 00:16:16.022 Size (in LBAs): 131072 (0GiB) 00:16:16.022 Capacity (in LBAs): 131072 (0GiB) 00:16:16.022 Utilization (in LBAs): 131072 (0GiB) 00:16:16.022 NGUID: ABCDEF0123456789ABCDEF0123456789 00:16:16.022 EUI64: ABCDEF0123456789 00:16:16.022 UUID: 2d1507ea-f353-4fe2-98ab-77104a4119ab 00:16:16.022 Thin Provisioning: Not Supported 00:16:16.022 Per-NS Atomic Units: Yes 00:16:16.022 Atomic Boundary Size (Normal): 0 00:16:16.022 Atomic Boundary Size (PFail): 0 00:16:16.022 Atomic Boundary Offset: 0 00:16:16.022 Maximum Single Source Range Length: 65535 00:16:16.022 Maximum Copy Length: 65535 00:16:16.022 Maximum Source Range Count: 1 00:16:16.022 NGUID/EUI64 Never Reused: No 00:16:16.022 Namespace Write Protected: No 00:16:16.022 Number of LBA Formats: 1 00:16:16.022 Current LBA Format: LBA Format #00 00:16:16.022 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:16.022 00:16:16.022 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:16:16.022 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
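For anyone replaying this by hand: the trace above is nvmftestfini unwinding host/identify.sh, deleting the subsystem over RPC, unloading the initiator kernel modules, and (just below) killing the nvmf_tgt process. A minimal standalone sketch of the same teardown, assuming the SPDK checkout path and default RPC socket that appear in this log:

#!/usr/bin/env bash
# Hand-driven version of the teardown traced above.
# Assumptions (taken from this log, adjust for your setup):
#   - SPDK checkout at /home/vagrant/spdk_repo/spdk
#   - target reachable on the default RPC socket /var/tmp/spdk.sock
set -euo pipefail

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Remove the subsystem the identify test created.
"$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# The harness kills nvmf_tgt by its recorded pid; resolving by name works too.
pkill -f nvmf_tgt || true

# Unload the initiator-side kernel modules in reverse dependency order.
modprobe -r nvme-tcp nvme-fabrics || true

The rmmod lines that follow are modprobe -v -r reporting each module (nvme_tcp, nvme_fabrics, nvme_keyring) as it unwinds.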
00:16:16.023 rmmod nvme_tcp 00:16:16.023 rmmod nvme_fabrics 00:16:16.023 rmmod nvme_keyring 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74314 ']' 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74314 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 74314 ']' 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 74314 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74314 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:16.023 killing process with pid 74314 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74314' 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 74314 00:16:16.023 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 74314 00:16:16.281 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:16.281 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:16.281 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:16.281 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:16:16.281 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:16:16.281 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:16:16.281 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:16.281 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:16.281 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:16.281 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:16.281 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:16.281 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:16.281 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:16.281 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:16.281 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:16.281 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:16.281 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 00:16:16.281 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:16.281 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:16.281 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:16.539 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:16.539 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:16.539 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:16.540 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.540 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:16.540 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.540 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:16:16.540 ************************************ 00:16:16.540 END TEST nvmf_identify 00:16:16.540 ************************************ 00:16:16.540 00:16:16.540 real 0m2.270s 00:16:16.540 user 0m4.788s 00:16:16.540 sys 0m0.734s 00:16:16.540 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:16.540 10:39:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:16.540 10:39:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:16.540 10:39:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:16.540 10:39:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:16.540 10:39:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.540 ************************************ 00:16:16.540 START TEST nvmf_perf 00:16:16.540 ************************************ 00:16:16.540 10:39:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:16.540 * Looking for test storage... 
00:16:16.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:16.540 10:39:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:16.540 10:39:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:16.540 10:39:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:16.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.799 --rc genhtml_branch_coverage=1 00:16:16.799 --rc genhtml_function_coverage=1 00:16:16.799 --rc genhtml_legend=1 00:16:16.799 --rc geninfo_all_blocks=1 00:16:16.799 --rc geninfo_unexecuted_blocks=1 00:16:16.799 00:16:16.799 ' 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:16.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.799 --rc genhtml_branch_coverage=1 00:16:16.799 --rc genhtml_function_coverage=1 00:16:16.799 --rc genhtml_legend=1 00:16:16.799 --rc geninfo_all_blocks=1 00:16:16.799 --rc geninfo_unexecuted_blocks=1 00:16:16.799 00:16:16.799 ' 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:16.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.799 --rc genhtml_branch_coverage=1 00:16:16.799 --rc genhtml_function_coverage=1 00:16:16.799 --rc genhtml_legend=1 00:16:16.799 --rc geninfo_all_blocks=1 00:16:16.799 --rc geninfo_unexecuted_blocks=1 00:16:16.799 00:16:16.799 ' 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:16.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.799 --rc genhtml_branch_coverage=1 00:16:16.799 --rc genhtml_function_coverage=1 00:16:16.799 --rc genhtml_legend=1 00:16:16.799 --rc geninfo_all_blocks=1 00:16:16.799 --rc geninfo_unexecuted_blocks=1 00:16:16.799 00:16:16.799 ' 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:16.799 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:16.800 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:16.800 Cannot find device "nvmf_init_br" 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:16.800 Cannot find device "nvmf_init_br2" 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:16.800 Cannot find device "nvmf_tgt_br" 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:16.800 Cannot find device "nvmf_tgt_br2" 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:16.800 Cannot find device "nvmf_init_br" 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:16.800 Cannot find device "nvmf_init_br2" 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:16.800 Cannot find device "nvmf_tgt_br" 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:16.800 Cannot find device "nvmf_tgt_br2" 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:16.800 Cannot find device "nvmf_br" 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:16.800 Cannot find device "nvmf_init_if" 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:16.800 Cannot find device "nvmf_init_if2" 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:16.800 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:16.800 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.800 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:16:16.801 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:16.801 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:16.801 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:17.059 10:39:42 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:17.059 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:17.060 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:17.060 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:17.060 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:17.060 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:17.060 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:17.060 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:16:17.060 00:16:17.060 --- 10.0.0.3 ping statistics --- 00:16:17.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.060 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:17.060 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:17.060 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:16:17.060 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:16:17.060 00:16:17.060 --- 10.0.0.4 ping statistics --- 00:16:17.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.060 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:17.060 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:17.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:17.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:17.060 00:16:17.060 --- 10.0.0.1 ping statistics --- 00:16:17.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.060 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:17.060 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:17.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:17.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:16:17.060 00:16:17.060 --- 10.0.0.2 ping statistics --- 00:16:17.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.060 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:17.318 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.318 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:16:17.318 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:17.318 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.318 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:17.318 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:17.318 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.318 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:17.318 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:17.318 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:16:17.318 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:17.318 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:17.318 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:17.318 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74562 00:16:17.318 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:17.318 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74562 00:16:17.318 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 74562 ']' 00:16:17.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.318 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.318 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:17.318 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
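At this point nvmf_veth_init has finished building the bridged test network used for the rest of the run: veth pairs for initiator and target, target ends moved into the nvmf_tgt_ns_spdk namespace, host-side peers enslaved to the nvmf_br bridge, iptables ACCEPT rules for TCP port 4420, and connectivity confirmed by the four pings above. A condensed single-pair sketch of that wiring (interface names and 10.0.0.0/24 addressing as in the log; needs root; the harness's iptables comment tagging is dropped for brevity):

#!/usr/bin/env bash
# One-pair version of the nvmf_veth_init topology traced above (run as root).
set -euo pipefail

ip netns add nvmf_tgt_ns_spdk

# veth pairs: *_if carries the address, *_br is its bridge-side peer.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers so the two namespaces can talk.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Let NVMe/TCP traffic in, as the harness does.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.3   # should answer, matching the ping statistics above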
00:16:17.318 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:17.318 10:39:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:17.318 [2024-11-15 10:39:42.645417] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:16:17.318 [2024-11-15 10:39:42.645531] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.318 [2024-11-15 10:39:42.801105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:17.590 [2024-11-15 10:39:42.873150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.590 [2024-11-15 10:39:42.873406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.590 [2024-11-15 10:39:42.873741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.590 [2024-11-15 10:39:42.873884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.590 [2024-11-15 10:39:42.874101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.590 [2024-11-15 10:39:42.875418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.590 [2024-11-15 10:39:42.875557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:17.590 [2024-11-15 10:39:42.875561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.590 [2024-11-15 10:39:42.875505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.590 [2024-11-15 10:39:42.932768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:17.590 10:39:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:17.590 10:39:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:16:17.590 10:39:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:17.590 10:39:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:17.590 10:39:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:17.590 10:39:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.590 10:39:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:17.590 10:39:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:16:18.166 10:39:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:16:18.166 10:39:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:16:18.423 10:39:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:16:18.423 10:39:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:18.681 10:39:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:16:18.681 10:39:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:16:18.681 10:39:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:16:18.681 10:39:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:16:18.681 10:39:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:18.939 [2024-11-15 10:39:44.304218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.939 10:39:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:19.197 10:39:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:19.197 10:39:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:19.454 10:39:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:19.454 10:39:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:16:19.712 10:39:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:19.970 [2024-11-15 10:39:45.425644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:19.970 10:39:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:20.555 10:39:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:16:20.555 10:39:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:20.555 10:39:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:16:20.555 10:39:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:21.520 Initializing NVMe Controllers 00:16:21.520 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:21.520 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:21.520 Initialization complete. Launching workers. 00:16:21.520 ======================================================== 00:16:21.520 Latency(us) 00:16:21.520 Device Information : IOPS MiB/s Average min max 00:16:21.520 PCIE (0000:00:10.0) NSID 1 from core 0: 24451.78 95.51 1308.13 286.75 7635.34 00:16:21.520 ======================================================== 00:16:21.520 Total : 24451.78 95.51 1308.13 286.75 7635.34 00:16:21.520 00:16:21.520 10:39:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:22.895 Initializing NVMe Controllers 00:16:22.895 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:22.895 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:22.895 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:22.895 Initialization complete. Launching workers. 
00:16:22.895 ======================================================== 00:16:22.895 Latency(us) 00:16:22.895 Device Information : IOPS MiB/s Average min max 00:16:22.895 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3641.35 14.22 274.30 107.84 4371.98 00:16:22.895 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.50 0.49 8095.36 4993.62 12021.58 00:16:22.895 ======================================================== 00:16:22.895 Total : 3765.85 14.71 532.86 107.84 12021.58 00:16:22.895 00:16:22.895 10:39:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:24.265 Initializing NVMe Controllers 00:16:24.265 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:24.265 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:24.265 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:24.265 Initialization complete. Launching workers. 00:16:24.265 ======================================================== 00:16:24.265 Latency(us) 00:16:24.265 Device Information : IOPS MiB/s Average min max 00:16:24.266 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8528.95 33.32 3752.49 529.90 8128.76 00:16:24.266 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4021.53 15.71 7969.91 5836.40 9568.16 00:16:24.266 ======================================================== 00:16:24.266 Total : 12550.48 49.03 5103.87 529.90 9568.16 00:16:24.266 00:16:24.266 10:39:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:16:24.266 10:39:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:26.794 Initializing NVMe Controllers 00:16:26.794 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:26.794 Controller IO queue size 128, less than required. 00:16:26.794 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:26.794 Controller IO queue size 128, less than required. 00:16:26.794 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:26.794 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:26.794 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:26.794 Initialization complete. Launching workers. 
00:16:26.794 ======================================================== 00:16:26.794 Latency(us) 00:16:26.794 Device Information : IOPS MiB/s Average min max 00:16:26.794 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1596.25 399.06 81592.69 41906.28 142566.88 00:16:26.794 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 652.40 163.10 203715.24 69923.91 327053.73 00:16:26.794 ======================================================== 00:16:26.794 Total : 2248.65 562.16 117023.98 41906.28 327053.73 00:16:26.794 00:16:26.794 10:39:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:16:27.054 Initializing NVMe Controllers 00:16:27.054 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:27.054 Controller IO queue size 128, less than required. 00:16:27.054 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:27.054 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:16:27.054 Controller IO queue size 128, less than required. 00:16:27.054 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:27.054 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:16:27.054 WARNING: Some requested NVMe devices were skipped 00:16:27.054 No valid NVMe controllers or AIO or URING devices found 00:16:27.054 10:39:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:16:29.584 Initializing NVMe Controllers 00:16:29.584 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:29.584 Controller IO queue size 128, less than required. 00:16:29.584 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:29.584 Controller IO queue size 128, less than required. 00:16:29.584 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:29.584 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:29.584 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:29.584 Initialization complete. Launching workers. 
00:16:29.584 00:16:29.584 ==================== 00:16:29.584 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:16:29.584 TCP transport: 00:16:29.584 polls: 9798 00:16:29.584 idle_polls: 6302 00:16:29.585 sock_completions: 3496 00:16:29.585 nvme_completions: 6283 00:16:29.585 submitted_requests: 9336 00:16:29.585 queued_requests: 1 00:16:29.585 00:16:29.585 ==================== 00:16:29.585 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:16:29.585 TCP transport: 00:16:29.585 polls: 12355 00:16:29.585 idle_polls: 7774 00:16:29.585 sock_completions: 4581 00:16:29.585 nvme_completions: 7085 00:16:29.585 submitted_requests: 10626 00:16:29.585 queued_requests: 1 00:16:29.585 ======================================================== 00:16:29.585 Latency(us) 00:16:29.585 Device Information : IOPS MiB/s Average min max 00:16:29.585 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1570.42 392.61 82775.44 41028.33 138109.67 00:16:29.585 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1770.91 442.73 72464.05 39380.50 117270.06 00:16:29.585 ======================================================== 00:16:29.585 Total : 3341.33 835.33 77310.39 39380.50 138109.67 00:16:29.585 00:16:29.585 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:16:29.842 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:30.157 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:16:30.157 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:30.157 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:16:30.157 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:30.157 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:16:30.157 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:30.157 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:16:30.157 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:30.157 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:30.157 rmmod nvme_tcp 00:16:30.158 rmmod nvme_fabrics 00:16:30.158 rmmod nvme_keyring 00:16:30.158 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:30.158 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:16:30.158 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:16:30.158 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74562 ']' 00:16:30.158 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74562 00:16:30.158 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 74562 ']' 00:16:30.158 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 74562 00:16:30.158 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:16:30.158 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:30.158 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74562 00:16:30.158 killing process with pid 74562 00:16:30.158 10:39:55 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:30.158 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:30.158 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74562' 00:16:30.158 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 74562 00:16:30.158 10:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 74562 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:16:31.088 ************************************ 00:16:31.088 END TEST nvmf_perf 00:16:31.088 ************************************ 
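The nvmf_perf pass above is driven entirely over JSON-RPC against the freshly started target. A condensed sketch of the sequence, using the rpc.py path, bdev names, NQN, and address exactly as they appear in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Bdev layer: attach local NVMe controllers, then add a 64 MiB malloc bdev
  # with 512-byte blocks (rpc.py prints the new bdev's name, Malloc0).
  /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | $rpc load_subsystem_config
  $rpc bdev_malloc_create 64 512
  # Target side: TCP transport, one subsystem, both bdevs as namespaces,
  # then a data listener plus the discovery listener on 10.0.0.3:4420.
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  for bdev in Malloc0 Nvme0n1; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  # The -o 36964 perf pass produced no I/O because 36964 is not sector-aligned:
  # 36964 % 512 == 36964 % 4096 == 100, so both namespaces were removed from that test.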
00:16:31.088 00:16:31.088 real 0m14.554s 00:16:31.088 user 0m52.341s 00:16:31.088 sys 0m4.035s 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.088 ************************************ 00:16:31.088 START TEST nvmf_fio_host 00:16:31.088 ************************************ 00:16:31.088 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:31.347 * Looking for test storage... 00:16:31.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:31.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.347 --rc genhtml_branch_coverage=1 00:16:31.347 --rc genhtml_function_coverage=1 00:16:31.347 --rc genhtml_legend=1 00:16:31.347 --rc geninfo_all_blocks=1 00:16:31.347 --rc geninfo_unexecuted_blocks=1 00:16:31.347 00:16:31.347 ' 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:31.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.347 --rc genhtml_branch_coverage=1 00:16:31.347 --rc genhtml_function_coverage=1 00:16:31.347 --rc genhtml_legend=1 00:16:31.347 --rc geninfo_all_blocks=1 00:16:31.347 --rc geninfo_unexecuted_blocks=1 00:16:31.347 00:16:31.347 ' 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:31.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.347 --rc genhtml_branch_coverage=1 00:16:31.347 --rc genhtml_function_coverage=1 00:16:31.347 --rc genhtml_legend=1 00:16:31.347 --rc geninfo_all_blocks=1 00:16:31.347 --rc geninfo_unexecuted_blocks=1 00:16:31.347 00:16:31.347 ' 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:31.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.347 --rc genhtml_branch_coverage=1 00:16:31.347 --rc genhtml_function_coverage=1 00:16:31.347 --rc genhtml_legend=1 00:16:31.347 --rc geninfo_all_blocks=1 00:16:31.347 --rc geninfo_unexecuted_blocks=1 00:16:31.347 00:16:31.347 ' 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.347 10:39:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.347 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.348 10:39:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:31.348 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
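nvmftestinit below first removes any stale nvmf_tgt_ns_spdk namespace, then rebuilds the virtual test network (NET_TYPE=virt). A minimal sketch of the topology nvmf_veth_init is about to construct, with the addresses used in this run: the initiator sides 10.0.0.1/.2 stay in the root namespace, the target sides 10.0.0.3/.4 move into nvmf_tgt_ns_spdk, and all four host-side veth ends hang off one bridge; the commands mirror the trace that follows.

  ip netns add nvmf_tgt_ns_spdk
  # Two veth pairs for the initiator side, two for the target side.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # One bridge joins the four host-side ends.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
      ip link set "$dev" master nvmf_br
  done
  # Open TCP/4420 from the initiator interfaces; the helper tags each rule
  # with an SPDK_NVMF comment so teardown can strip them again via
  # iptables-save | grep -v SPDK_NVMF | iptables-restore.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT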
00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:31.348 Cannot find device "nvmf_init_br" 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:31.348 Cannot find device "nvmf_init_br2" 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:31.348 Cannot find device "nvmf_tgt_br" 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:16:31.348 Cannot find device "nvmf_tgt_br2" 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:31.348 Cannot find device "nvmf_init_br" 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:31.348 Cannot find device "nvmf_init_br2" 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:31.348 Cannot find device "nvmf_tgt_br" 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:31.348 Cannot find device "nvmf_tgt_br2" 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:31.348 Cannot find device "nvmf_br" 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:16:31.348 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:31.607 Cannot find device "nvmf_init_if" 00:16:31.607 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:16:31.607 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:31.607 Cannot find device "nvmf_init_if2" 00:16:31.607 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:16:31.607 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:31.607 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:31.607 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:16:31.607 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:31.607 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:31.607 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:16:31.607 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:31.607 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:31.607 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:31.607 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:31.607 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:31.607 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:31.607 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:31.607 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:16:31.607 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:31.607 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:31.607 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:31.607 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:31.607 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:31.607 10:39:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:31.607 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:31.607 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:31.607 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:31.607 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:31.607 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:31.607 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:31.607 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:31.607 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:31.607 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:31.607 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:31.607 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:31.607 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:31.607 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:31.607 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:31.865 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:31.865 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:16:31.865 00:16:31.865 --- 10.0.0.3 ping statistics --- 00:16:31.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.865 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:31.865 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:31.865 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:16:31.865 00:16:31.865 --- 10.0.0.4 ping statistics --- 00:16:31.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.865 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:31.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:31.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:16:31.865 00:16:31.865 --- 10.0.0.1 ping statistics --- 00:16:31.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.865 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:31.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:31.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:16:31.865 00:16:31.865 --- 10.0.0.2 ping statistics --- 00:16:31.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.865 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75013 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75013 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@833 -- # '[' -z 75013 ']' 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.865 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:31.866 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.866 [2024-11-15 10:39:57.217974] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:16:31.866 [2024-11-15 10:39:57.218706] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.123 [2024-11-15 10:39:57.376990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:32.124 [2024-11-15 10:39:57.441284] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:32.124 [2024-11-15 10:39:57.441347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:32.124 [2024-11-15 10:39:57.441362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:32.124 [2024-11-15 10:39:57.441373] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:32.124 [2024-11-15 10:39:57.441382] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
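With the namespace up, fio.sh launches the target inside it and blocks until the RPC server answers; SPDK_TEST_URING=1 is why the notice below reports the default socket implementation overridden to uring. A sketch of that launch, assuming the default /var/tmp/spdk.sock RPC endpoint (an assumption here); the flags and paths are taken from the trace:

  # nvmf_tgt pinned inside the namespace: shm id 0 (-i 0), all tracepoint
  # groups enabled (-e 0xFFFF), reactors on cores 0-3 (-m 0xF).
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # waitforlisten polls until the app accepts RPCs on /var/tmp/spdk.sock,
  # after which the TCP transport is created (host/fio.sh@29 in the trace below):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192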
00:16:32.124 [2024-11-15 10:39:57.442598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.124 [2024-11-15 10:39:57.442739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.124 [2024-11-15 10:39:57.442866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:32.124 [2024-11-15 10:39:57.442875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.124 [2024-11-15 10:39:57.519894] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:32.124 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:32.124 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:16:32.124 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:32.699 [2024-11-15 10:39:57.898062] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:32.699 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:16:32.699 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:32.699 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.699 10:39:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:32.988 Malloc1 00:16:32.988 10:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:33.247 10:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:33.505 10:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:33.764 [2024-11-15 10:39:59.133592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:33.764 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:34.022 10:39:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:16:34.281 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:34.281 fio-3.35 00:16:34.281 Starting 1 thread 00:16:36.809 00:16:36.809 test: (groupid=0, jobs=1): err= 0: pid=75094: Fri Nov 15 10:40:01 2024 00:16:36.809 read: IOPS=8892, BW=34.7MiB/s (36.4MB/s)(69.7MiB/2006msec) 00:16:36.809 slat (nsec): min=1986, max=198355, avg=2622.58, stdev=2372.36 00:16:36.809 clat (usec): min=1856, max=13827, avg=7483.33, stdev=608.61 00:16:36.809 lat (usec): min=1892, max=13829, avg=7485.95, stdev=608.36 00:16:36.809 clat percentiles (usec): 00:16:36.809 | 1.00th=[ 6194], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7111], 00:16:36.809 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7439], 60.00th=[ 7570], 00:16:36.809 | 70.00th=[ 7701], 80.00th=[ 7832], 90.00th=[ 8029], 95.00th=[ 8225], 00:16:36.809 | 99.00th=[ 8979], 99.50th=[11207], 99.90th=[12387], 99.95th=[13304], 00:16:36.809 | 99.99th=[13566] 00:16:36.809 bw ( KiB/s): min=34746, max=36128, per=99.84%, avg=35514.50, stdev=601.84, samples=4 00:16:36.809 iops : min= 8686, max= 9032, avg=8878.50, stdev=150.67, samples=4 00:16:36.809 write: IOPS=8905, BW=34.8MiB/s (36.5MB/s)(69.8MiB/2006msec); 0 zone resets 00:16:36.809 slat (usec): min=2, max=139, avg= 2.70, stdev= 1.44 00:16:36.809 clat (usec): min=1651, max=12636, avg=6840.19, stdev=541.44 00:16:36.809 lat (usec): min=1665, max=12638, avg=6842.89, stdev=541.30 00:16:36.809 clat 
percentiles (usec): 00:16:36.809 | 1.00th=[ 5669], 5.00th=[ 6194], 10.00th=[ 6325], 20.00th=[ 6521], 00:16:36.809 | 30.00th=[ 6652], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6915], 00:16:36.809 | 70.00th=[ 7046], 80.00th=[ 7111], 90.00th=[ 7308], 95.00th=[ 7504], 00:16:36.809 | 99.00th=[ 8094], 99.50th=[ 9110], 99.90th=[11469], 99.95th=[11731], 00:16:36.809 | 99.99th=[12649] 00:16:36.809 bw ( KiB/s): min=35488, max=35776, per=99.94%, avg=35600.00, stdev=123.94, samples=4 00:16:36.809 iops : min= 8872, max= 8944, avg=8900.00, stdev=30.98, samples=4 00:16:36.809 lat (msec) : 2=0.03%, 4=0.13%, 10=99.31%, 20=0.53% 00:16:36.809 cpu : usr=69.83%, sys=22.64%, ctx=8, majf=0, minf=7 00:16:36.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:36.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:36.809 issued rwts: total=17838,17864,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:36.809 00:16:36.809 Run status group 0 (all jobs): 00:16:36.809 READ: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.7MiB (73.1MB), run=2006-2006msec 00:16:36.809 WRITE: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=69.8MiB (73.2MB), run=2006-2006msec 00:16:36.809 10:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:36.809 10:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:36.809 10:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:16:36.809 10:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:36.810 10:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:16:36.810 10:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:36.810 10:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:16:36.810 10:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:16:36.810 10:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:16:36.810 10:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:36.810 10:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:16:36.810 10:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:16:36.810 10:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:16:36.810 10:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:16:36.810 10:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:16:36.810 10:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:36.810 10:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:16:36.810 10:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:16:36.810 10:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:16:36.810 10:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:16:36.810 10:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:36.810 10:40:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:36.810 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:16:36.810 fio-3.35 00:16:36.810 Starting 1 thread 00:16:39.443 00:16:39.443 test: (groupid=0, jobs=1): err= 0: pid=75137: Fri Nov 15 10:40:04 2024 00:16:39.443 read: IOPS=8194, BW=128MiB/s (134MB/s)(257MiB/2004msec) 00:16:39.443 slat (usec): min=3, max=135, avg= 3.69, stdev= 1.84 00:16:39.443 clat (usec): min=1812, max=17135, avg=8645.14, stdev=2448.77 00:16:39.443 lat (usec): min=1816, max=17139, avg=8648.83, stdev=2448.84 00:16:39.443 clat percentiles (usec): 00:16:39.443 | 1.00th=[ 4146], 5.00th=[ 5014], 10.00th=[ 5538], 20.00th=[ 6456], 00:16:39.443 | 30.00th=[ 7242], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 9110], 00:16:39.443 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[11600], 95.00th=[13042], 00:16:39.443 | 99.00th=[15664], 99.50th=[16188], 99.90th=[16909], 99.95th=[16909], 00:16:39.443 | 99.99th=[17171] 00:16:39.443 bw ( KiB/s): min=60672, max=75680, per=52.07%, avg=68272.00, stdev=7012.72, samples=4 00:16:39.444 iops : min= 3792, max= 4730, avg=4267.00, stdev=438.30, samples=4 00:16:39.444 write: IOPS=4894, BW=76.5MiB/s (80.2MB/s)(140MiB/1827msec); 0 zone resets 00:16:39.444 slat (usec): min=34, max=360, avg=37.72, stdev= 7.00 00:16:39.444 clat (usec): min=3500, max=21232, avg=11975.22, stdev=2175.91 00:16:39.444 lat (usec): min=3548, max=21269, avg=12012.94, stdev=2175.60 00:16:39.444 clat percentiles (usec): 00:16:39.444 | 1.00th=[ 7439], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10159], 00:16:39.444 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11863], 60.00th=[12387], 00:16:39.444 | 70.00th=[13042], 80.00th=[13698], 90.00th=[14615], 95.00th=[15664], 00:16:39.444 | 99.00th=[18482], 99.50th=[19006], 99.90th=[20055], 99.95th=[20841], 00:16:39.444 | 99.99th=[21103] 00:16:39.444 bw ( KiB/s): min=62624, max=79104, per=90.77%, avg=71088.00, stdev=7410.68, samples=4 00:16:39.444 iops : min= 3914, max= 4944, avg=4443.00, stdev=463.17, samples=4 00:16:39.444 lat (msec) : 2=0.02%, 4=0.43%, 10=52.02%, 20=47.49%, 50=0.05% 00:16:39.444 cpu : usr=83.62%, sys=12.73%, ctx=3, majf=0, minf=12 00:16:39.444 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:39.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:39.444 issued rwts: total=16421,8943,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:39.444 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:39.444 00:16:39.444 Run status group 0 (all jobs): 
00:16:39.444 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=257MiB (269MB), run=2004-2004msec 00:16:39.444 WRITE: bw=76.5MiB/s (80.2MB/s), 76.5MiB/s-76.5MiB/s (80.2MB/s-80.2MB/s), io=140MiB (147MB), run=1827-1827msec 00:16:39.444 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:39.444 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:16:39.444 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:39.444 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:39.444 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:16:39.444 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:39.444 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:16:39.444 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:39.445 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:16:39.445 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:39.445 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:39.445 rmmod nvme_tcp 00:16:39.445 rmmod nvme_fabrics 00:16:39.445 rmmod nvme_keyring 00:16:39.445 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:39.445 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:16:39.445 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:16:39.445 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 75013 ']' 00:16:39.445 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 75013 00:16:39.445 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 75013 ']' 00:16:39.445 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 75013 00:16:39.445 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:16:39.445 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:39.445 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75013 00:16:39.445 killing process with pid 75013 00:16:39.445 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:39.445 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:39.445 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75013' 00:16:39.445 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 75013 00:16:39.445 10:40:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 75013 00:16:39.706 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:39.706 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:39.706 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:39.706 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:16:39.706 10:40:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:16:39.706 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:16:39.706 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:39.706 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:39.706 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:39.706 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:39.706 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:39.706 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:39.964 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:39.964 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:39.964 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:39.964 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:39.964 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:39.964 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:39.964 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:39.964 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:39.964 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:39.964 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:39.964 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:39.964 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.964 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:39.964 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.964 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:16:39.964 00:16:39.964 real 0m8.887s 00:16:39.964 user 0m35.223s 00:16:39.964 sys 0m2.419s 00:16:39.964 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:39.964 10:40:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.964 ************************************ 00:16:39.964 END TEST nvmf_fio_host 00:16:39.964 ************************************ 00:16:39.964 10:40:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:39.964 10:40:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:39.964 10:40:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:39.964 10:40:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.964 ************************************ 00:16:39.964 START TEST nvmf_failover 
00:16:39.964 ************************************ 00:16:39.964 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:40.224 * Looking for test storage... 00:16:40.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:40.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.224 --rc genhtml_branch_coverage=1 00:16:40.224 --rc genhtml_function_coverage=1 00:16:40.224 --rc genhtml_legend=1 00:16:40.224 --rc geninfo_all_blocks=1 00:16:40.224 --rc geninfo_unexecuted_blocks=1 00:16:40.224 00:16:40.224 ' 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:40.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.224 --rc genhtml_branch_coverage=1 00:16:40.224 --rc genhtml_function_coverage=1 00:16:40.224 --rc genhtml_legend=1 00:16:40.224 --rc geninfo_all_blocks=1 00:16:40.224 --rc geninfo_unexecuted_blocks=1 00:16:40.224 00:16:40.224 ' 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:40.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.224 --rc genhtml_branch_coverage=1 00:16:40.224 --rc genhtml_function_coverage=1 00:16:40.224 --rc genhtml_legend=1 00:16:40.224 --rc geninfo_all_blocks=1 00:16:40.224 --rc geninfo_unexecuted_blocks=1 00:16:40.224 00:16:40.224 ' 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:40.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.224 --rc genhtml_branch_coverage=1 00:16:40.224 --rc genhtml_function_coverage=1 00:16:40.224 --rc genhtml_legend=1 00:16:40.224 --rc geninfo_all_blocks=1 00:16:40.224 --rc geninfo_unexecuted_blocks=1 00:16:40.224 00:16:40.224 ' 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.224 
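The host identity above comes from nvme-cli: nvme gen-hostnqn emits a UUID-based NQN, and common.sh reuses the same UUID as the host ID. A minimal sketch of that pattern, reusing the cnode1 subsystem and 10.0.0.3:4420 portal configured later in this run (illustrative only: this test drives I/O through bdevperf rather than the kernel initiator):

    # Derive a hostnqn/hostid pair the way nvmf/common.sh does above (sketch).
    # gen-hostnqn prints e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>.
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # strip the prefix, keep the bare UUID
    nvme connect -t tcp -a 10.0.0.3 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"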
10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.224 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:40.225 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
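With NET_TYPE=virt there is no physical NIC in play: the nvmf_veth_init trace that follows builds the whole fabric locally from veth pairs, a bridge, and one network namespace holding the target. Condensed to a single initiator/target pair (interface names and addresses exactly as in this log; the script repeats the pattern for the *_if2/*_br2 interfaces), the topology is roughly:

    # Initiator end stays in the root namespace; the target end is moved into
    # nvmf_tgt_ns_spdk; both bridge-side peers are enslaved to nvmf_br.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br

The "Cannot find device" errors below are the expected first-run pattern: the helper tears down any leftover interfaces before creating fresh ones, and each failed delete is followed by a traced true.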
00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:40.225 Cannot find device "nvmf_init_br" 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:40.225 Cannot find device "nvmf_init_br2" 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:16:40.225 Cannot find device "nvmf_tgt_br" 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:40.225 Cannot find device "nvmf_tgt_br2" 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:40.225 Cannot find device "nvmf_init_br" 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:40.225 Cannot find device "nvmf_init_br2" 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:16:40.225 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:40.483 Cannot find device "nvmf_tgt_br" 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:40.483 Cannot find device "nvmf_tgt_br2" 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:40.483 Cannot find device "nvmf_br" 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:40.483 Cannot find device "nvmf_init_if" 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:40.483 Cannot find device "nvmf_init_if2" 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:40.483 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:40.483 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:40.483 
10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:40.483 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:40.484 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:40.484 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:40.484 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:40.484 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:40.484 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:16:40.484 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:40.484 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:40.484 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:16:40.484 00:16:40.484 --- 10.0.0.3 ping statistics --- 00:16:40.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.484 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:40.484 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:40.484 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:40.484 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:16:40.484 00:16:40.484 --- 10.0.0.4 ping statistics --- 00:16:40.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.484 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:40.484 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:40.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:40.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:40.813 00:16:40.813 --- 10.0.0.1 ping statistics --- 00:16:40.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.813 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:40.813 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:40.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:16:40.813 00:16:40.813 --- 10.0.0.2 ping statistics --- 00:16:40.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.813 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:40.813 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.813 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:16:40.813 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:40.813 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.813 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:40.813 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:40.813 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.813 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:40.813 10:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:40.813 10:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:40.813 10:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:40.813 10:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:40.813 10:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:40.813 10:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75418 00:16:40.813 10:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:40.813 10:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75418 00:16:40.813 10:40:06 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 75418 ']' 00:16:40.813 10:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.813 10:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:40.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.813 10:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.813 10:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:40.813 10:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:40.813 [2024-11-15 10:40:06.069988] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:16:40.813 [2024-11-15 10:40:06.070078] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.813 [2024-11-15 10:40:06.220293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:41.088 [2024-11-15 10:40:06.291465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.088 [2024-11-15 10:40:06.291543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.088 [2024-11-15 10:40:06.291560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:41.088 [2024-11-15 10:40:06.291571] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:41.088 [2024-11-15 10:40:06.291580] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
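The waitforlisten above polls until the freshly launched target answers on /var/tmp/spdk.sock, giving up if pid 75418 exits first (hence the local max_retries=100 in the trace). The real helper in autotest_common.sh carries more bookkeeping, but its core is a loop along these lines (a sketch; wait_for_rpc is a made-up name, and rpc_get_methods is used here only because it is a cheap RPC that every SPDK app answers):

    # Poll the SPDK RPC socket until the app answers or the process dies.
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # app exited before listening
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" \
                rpc_get_methods >/dev/null 2>&1; then
                return 0                              # socket is up and answering
            fi
            sleep 0.1
        done
        return 1
    }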
00:16:41.088 [2024-11-15 10:40:06.292808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:41.088 [2024-11-15 10:40:06.292923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:16:41.088 [2024-11-15 10:40:06.292929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:41.088 [2024-11-15 10:40:06.349226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:16:41.654 10:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:16:41.654 10:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:16:41.654 10:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:16:41.654 10:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable
00:16:41.654 10:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:16:41.654 10:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:41.654 10:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:16:41.913 [2024-11-15 10:40:07.374879] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:41.913 10:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:16:42.479 Malloc0
00:16:42.479 10:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:16:42.737 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:16:42.995 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:16:43.254 [2024-11-15 10:40:08.617892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:16:43.254 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:16:43.513 [2024-11-15 10:40:08.866095] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:16:43.513 10:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:16:43.771 [2024-11-15 10:40:09.118465] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 ***
00:16:43.771 10:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75481
00:16:43.771 10:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:16:43.771 10:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
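At this point everything the failover test needs is in place: a malloc-backed namespace under nqn.2016-06.io.spdk:cnode1, three TCP listeners (4420, 4421, 4422) on the same address, and a paused bdevperf (-z) with its own RPC socket. The choreography that follows registers two portals as failover paths of one bdev, then pulls listeners out from under the active path while verify I/O runs; condensed from the trace below:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Register two paths to the same subsystem with multipath mode "failover":
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # Drop the active portal on the target side; I/O should move to 4421:
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420

Note the two sockets in play: the attach calls go to bdevperf's RPC socket (the host side), while the listener add/remove calls go to the target's default /var/tmp/spdk.sock.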
00:16:43.771 10:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75481 /var/tmp/bdevperf.sock
00:16:43.771 10:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 75481 ']'
00:16:43.771 10:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:16:43.771 10:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:16:43.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:16:43.771 10:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:16:43.771 10:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
00:16:43.771 10:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:16:45.219 10:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:16:45.219 10:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:16:45.219 10:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:16:45.219 NVMe0n1
00:16:45.219 10:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:16:45.785
00:16:45.785 10:40:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75505
00:16:45.785 10:40:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:16:45.785 10:40:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:16:46.719 10:40:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:16:46.978 10:40:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:16:50.323 10:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:16:50.323
00:16:50.323 10:40:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:16:50.581 10:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:16:53.866 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:16:54.125 [2024-11-15 10:40:19.375579] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:16:54.125 10:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:16:55.059 10:40:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:16:55.317 [2024-11-15 10:40:20.686174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e5660 is same with the state(6) to be set
00:16:55.317 10:40:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75505
00:17:01.892 {
00:17:01.892 "results": [
00:17:01.892 {
00:17:01.892 "job": "NVMe0n1",
00:17:01.892 "core_mask": "0x1",
00:17:01.892 "workload": "verify",
00:17:01.892 "status": "finished",
00:17:01.892 "verify_range": {
00:17:01.892 "start": 0,
00:17:01.892 "length": 16384
00:17:01.892 },
00:17:01.892 "queue_depth": 128,
00:17:01.892 "io_size": 4096,
00:17:01.892 "runtime": 15.008314,
00:17:01.892 "iops": 8897.201910887525,
00:17:01.892 "mibps": 34.75469496440439,
00:17:01.892 "io_failed": 3325,
00:17:01.892 "io_timeout": 0,
00:17:01.892 "avg_latency_us": 14003.601018661151,
00:17:01.892 "min_latency_us": 659.0836363636364,
00:17:01.892 "max_latency_us": 29908.247272727273
00:17:01.892 }
00:17:01.892 ],
00:17:01.892 "core_count": 1
00:17:01.892 }
00:17:01.892 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75481
00:17:01.892 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 75481 ']'
00:17:01.892 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 75481
00:17:01.892 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:17:01.892 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:17:01.892 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75481
00:17:01.892 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:17:01.892 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:17:01.892 killing process with pid 75481
00:17:01.892 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75481'
00:17:01.892 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 75481
00:17:01.892 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 75481
00:17:01.892 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:17:01.892 [2024-11-15 10:40:09.206151] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:17:01.892 [2024-11-15 10:40:09.206302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75481 ]
00:17:01.892 [2024-11-15 10:40:09.360629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:01.892 [2024-11-15 10:40:09.426904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:01.892 [2024-11-15 10:40:09.483744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:17:01.892 Running I/O for 15 seconds...
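The JSON summary above is internally consistent: with -o 4096, throughput in MiB/s is iops * 4096 / 2^20, and io_failed=3325 counts I/O caught in flight across the three listener removals (the ABORTED - SQ DELETION completions dumped below) rather than data-verification errors; the run still ends with status "finished". A quick check of the bandwidth figure:

    # 8897.2 IOPS at 4 KiB each is ~34.75 MiB/s, matching the "mibps" field.
    awk 'BEGIN { printf "%.2f MiB/s\n", 8897.201910887525 * 4096 / 1048576 }'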
00:17:01.892 6933.00 IOPS, 27.08 MiB/s [2024-11-15T10:40:27.390Z] [2024-11-15 10:40:12.329905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.892 [2024-11-15 10:40:12.329978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.892 [2024-11-15 10:40:12.330011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.892 [2024-11-15 10:40:12.330027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.892 [2024-11-15 10:40:12.330045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.892 [2024-11-15 10:40:12.330059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.892 [2024-11-15 10:40:12.330075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.892 [2024-11-15 10:40:12.330089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.892 [2024-11-15 10:40:12.330105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.892 [2024-11-15 10:40:12.330120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.892 [2024-11-15 10:40:12.330137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.892 [2024-11-15 10:40:12.330151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.892 [2024-11-15 10:40:12.330167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.892 [2024-11-15 10:40:12.330181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.892 [2024-11-15 10:40:12.330197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.892 [2024-11-15 10:40:12.330212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.892 [2024-11-15 10:40:12.330227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.892 [2024-11-15 10:40:12.330241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.892 [2024-11-15 10:40:12.330258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.892 [2024-11-15 10:40:12.330272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:17:01.892 [2024-11-15 10:40:12.330287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b1cb0 is same with the state(6) to be set 00:17:01.892 [2024-11-15 10:40:12.330337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.892 [2024-11-15 10:40:12.330350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.892 [2024-11-15 10:40:12.330361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64672 len:8 PRP1 0x0 PRP2 0x0 00:17:01.892 [2024-11-15 10:40:12.330375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.892 [2024-11-15 10:40:12.330390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.892 [2024-11-15 10:40:12.330401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.892 [2024-11-15 10:40:12.330411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64800 len:8 PRP1 0x0 PRP2 0x0 00:17:01.892 [2024-11-15 10:40:12.330425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.892 [2024-11-15 10:40:12.330438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.892 [2024-11-15 10:40:12.330449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.892 [2024-11-15 10:40:12.330459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64808 len:8 PRP1 0x0 PRP2 0x0 00:17:01.892 [2024-11-15 10:40:12.330473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.892 [2024-11-15 10:40:12.330495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.892 [2024-11-15 10:40:12.330507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.892 [2024-11-15 10:40:12.330533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64816 len:8 PRP1 0x0 PRP2 0x0 00:17:01.892 [2024-11-15 10:40:12.330548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.892 [2024-11-15 10:40:12.330563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.892 [2024-11-15 10:40:12.330573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.892 [2024-11-15 10:40:12.330584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64824 len:8 PRP1 0x0 PRP2 0x0 00:17:01.892 [2024-11-15 10:40:12.330598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.892 [2024-11-15 10:40:12.330612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.892 [2024-11-15 10:40:12.330623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.892 [2024-11-15 10:40:12.330633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64832 len:8 PRP1 0x0 PRP2 0x0 00:17:01.892 [2024-11-15 10:40:12.330647] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.892 [2024-11-15 10:40:12.330661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.892 [2024-11-15 10:40:12.330671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.892 [2024-11-15 10:40:12.330682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64840 len:8 PRP1 0x0 PRP2 0x0 00:17:01.892 [2024-11-15 10:40:12.330696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.892 [2024-11-15 10:40:12.330710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.330720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.330731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64848 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.330754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 10:40:12.330769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.330780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.330791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64856 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.330805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 10:40:12.330819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.330829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.330840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64864 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.330853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 10:40:12.330867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.330877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.330888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64872 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.330902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 10:40:12.330921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.330932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.330943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64880 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.330957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 10:40:12.330972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.330982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.330992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64888 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.331006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 10:40:12.331020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.331030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.331041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64896 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.331055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 10:40:12.331069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.331079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.331090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64904 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.331103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 10:40:12.331117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.331128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.331145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64912 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.331160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 10:40:12.331174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.331185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.331196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64920 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.331209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 10:40:12.331223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.331233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.331244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64928 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.331258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 10:40:12.331272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.331282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.331293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64936 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.331306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 10:40:12.331324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.331335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.331352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64944 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.331367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 10:40:12.331381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.331391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.331402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64952 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.331415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 10:40:12.331430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.331440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.331450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64960 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.331464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 10:40:12.331478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.331488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.331499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64968 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.331525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 10:40:12.331542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.331559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.331571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64976 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.331585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 
10:40:12.331602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.331612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.331623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64984 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.331637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 10:40:12.331650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.331661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.331671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64992 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.331684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 10:40:12.331698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.331709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.331719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65000 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.331733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 10:40:12.331752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.331764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.331778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65008 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.331792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 10:40:12.331806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.331816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.331827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65016 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.331841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 10:40:12.331855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.331865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.331876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65024 len:8 PRP1 0x0 PRP2 0x0 00:17:01.893 [2024-11-15 10:40:12.331890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.893 [2024-11-15 10:40:12.331904] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.893 [2024-11-15 10:40:12.331914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.893 [2024-11-15 10:40:12.331925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65032 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.331938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.331962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.894 [2024-11-15 10:40:12.331974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.331984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65040 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.331999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.332012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.894 [2024-11-15 10:40:12.332023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.332033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65048 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.332047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.332061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.894 [2024-11-15 10:40:12.332071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.332082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65056 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.332096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.332110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.894 [2024-11-15 10:40:12.332120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.332131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65064 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.332144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.332162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.894 [2024-11-15 10:40:12.332173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.332188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65072 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.332202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.332216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:17:01.894 [2024-11-15 10:40:12.332226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.332237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65080 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.332251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.332265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.894 [2024-11-15 10:40:12.332275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.332286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65088 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.332299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.332314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.894 [2024-11-15 10:40:12.332324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.332335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65096 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.332354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.332369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.894 [2024-11-15 10:40:12.332379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.332390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65104 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.332404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.332418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.894 [2024-11-15 10:40:12.332428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.332439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65112 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.332452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.332466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.894 [2024-11-15 10:40:12.332476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.332486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65120 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.332500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.332527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.894 [2024-11-15 10:40:12.332539] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.332550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65128 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.332563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.332581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.894 [2024-11-15 10:40:12.332592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.332606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65136 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.332621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.332635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.894 [2024-11-15 10:40:12.332645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.332655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65144 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.332669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.332683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.894 [2024-11-15 10:40:12.332693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.332704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65152 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.332719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.332733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.894 [2024-11-15 10:40:12.332749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.332761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65160 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.332776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.332790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.894 [2024-11-15 10:40:12.332800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.332811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65168 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.332824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.332838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.894 [2024-11-15 10:40:12.332848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.332859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65176 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.332873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.332887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.894 [2024-11-15 10:40:12.332898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.332908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65184 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.332922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.332936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.894 [2024-11-15 10:40:12.332946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.332956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65192 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.332970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.332989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.894 [2024-11-15 10:40:12.333000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.333011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65200 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.333025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.333039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.894 [2024-11-15 10:40:12.333049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.333060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65208 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.333073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.894 [2024-11-15 10:40:12.333087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.894 [2024-11-15 10:40:12.333097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.894 [2024-11-15 10:40:12.333108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65216 len:8 PRP1 0x0 PRP2 0x0 00:17:01.894 [2024-11-15 10:40:12.333121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.333141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.333152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 
[2024-11-15 10:40:12.333162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65224 len:8 PRP1 0x0 PRP2 0x0 00:17:01.895 [2024-11-15 10:40:12.333176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.333190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.333200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 [2024-11-15 10:40:12.333210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65232 len:8 PRP1 0x0 PRP2 0x0 00:17:01.895 [2024-11-15 10:40:12.333224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.333238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.333248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 [2024-11-15 10:40:12.333258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65240 len:8 PRP1 0x0 PRP2 0x0 00:17:01.895 [2024-11-15 10:40:12.333272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.333286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.333296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 [2024-11-15 10:40:12.333307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65248 len:8 PRP1 0x0 PRP2 0x0 00:17:01.895 [2024-11-15 10:40:12.333320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.333334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.333344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 [2024-11-15 10:40:12.333355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65256 len:8 PRP1 0x0 PRP2 0x0 00:17:01.895 [2024-11-15 10:40:12.333368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.333386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.333397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 [2024-11-15 10:40:12.333408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65264 len:8 PRP1 0x0 PRP2 0x0 00:17:01.895 [2024-11-15 10:40:12.333421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.333435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.333446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 [2024-11-15 10:40:12.333456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65272 len:8 PRP1 0x0 PRP2 0x0 00:17:01.895 [2024-11-15 10:40:12.333470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.333484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.333494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 [2024-11-15 10:40:12.333505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65280 len:8 PRP1 0x0 PRP2 0x0 00:17:01.895 [2024-11-15 10:40:12.333536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.333553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.333563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 [2024-11-15 10:40:12.333574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65288 len:8 PRP1 0x0 PRP2 0x0 00:17:01.895 [2024-11-15 10:40:12.333587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.333601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.333621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 [2024-11-15 10:40:12.333633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65296 len:8 PRP1 0x0 PRP2 0x0 00:17:01.895 [2024-11-15 10:40:12.333647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.333661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.333672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 [2024-11-15 10:40:12.333683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65304 len:8 PRP1 0x0 PRP2 0x0 00:17:01.895 [2024-11-15 10:40:12.333696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.333710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.333720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 [2024-11-15 10:40:12.333731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65312 len:8 PRP1 0x0 PRP2 0x0 00:17:01.895 [2024-11-15 10:40:12.333745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.333759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.333769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 [2024-11-15 10:40:12.333779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:65320 len:8 PRP1 0x0 PRP2 0x0 00:17:01.895 [2024-11-15 10:40:12.333793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.333812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.333823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 [2024-11-15 10:40:12.333834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65328 len:8 PRP1 0x0 PRP2 0x0 00:17:01.895 [2024-11-15 10:40:12.333853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.333868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.333878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 [2024-11-15 10:40:12.333890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65336 len:8 PRP1 0x0 PRP2 0x0 00:17:01.895 [2024-11-15 10:40:12.333903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.333917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.333928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 [2024-11-15 10:40:12.333946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65344 len:8 PRP1 0x0 PRP2 0x0 00:17:01.895 [2024-11-15 10:40:12.333960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.333975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.333985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 [2024-11-15 10:40:12.333996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65352 len:8 PRP1 0x0 PRP2 0x0 00:17:01.895 [2024-11-15 10:40:12.334009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.334023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.334033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 [2024-11-15 10:40:12.334044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65360 len:8 PRP1 0x0 PRP2 0x0 00:17:01.895 [2024-11-15 10:40:12.334058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.334072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.334082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 [2024-11-15 10:40:12.334092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65368 len:8 PRP1 0x0 PRP2 0x0 
00:17:01.895 [2024-11-15 10:40:12.334106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.334120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.334130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 [2024-11-15 10:40:12.334141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65376 len:8 PRP1 0x0 PRP2 0x0 00:17:01.895 [2024-11-15 10:40:12.334155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.334169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.334179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 [2024-11-15 10:40:12.334189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65384 len:8 PRP1 0x0 PRP2 0x0 00:17:01.895 [2024-11-15 10:40:12.334203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.334221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.334232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 [2024-11-15 10:40:12.334244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65392 len:8 PRP1 0x0 PRP2 0x0 00:17:01.895 [2024-11-15 10:40:12.334258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.334272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.334282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.895 [2024-11-15 10:40:12.334293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65400 len:8 PRP1 0x0 PRP2 0x0 00:17:01.895 [2024-11-15 10:40:12.334306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.895 [2024-11-15 10:40:12.334320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.895 [2024-11-15 10:40:12.334334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.896 [2024-11-15 10:40:12.334348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65408 len:8 PRP1 0x0 PRP2 0x0 00:17:01.896 [2024-11-15 10:40:12.334362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.896 [2024-11-15 10:40:12.334376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.896 [2024-11-15 10:40:12.334386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.896 [2024-11-15 10:40:12.334397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65416 len:8 PRP1 0x0 PRP2 0x0 00:17:01.896 [2024-11-15 10:40:12.334410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.896 [2024-11-15 10:40:12.334424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.896 [2024-11-15 10:40:12.334435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.896 [2024-11-15 10:40:12.334446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65424 len:8 PRP1 0x0 PRP2 0x0 00:17:01.896 [2024-11-15 10:40:12.334459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.896 [2024-11-15 10:40:12.334473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.896 [2024-11-15 10:40:12.334483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.896 [2024-11-15 10:40:12.334493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65432 len:8 PRP1 0x0 PRP2 0x0 00:17:01.896 [2024-11-15 10:40:12.334507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.896 [2024-11-15 10:40:12.334531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.896 [2024-11-15 10:40:12.347313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.896 [2024-11-15 10:40:12.347346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65440 len:8 PRP1 0x0 PRP2 0x0 00:17:01.896 [2024-11-15 10:40:12.347364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.896 [2024-11-15 10:40:12.347384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.896 [2024-11-15 10:40:12.347396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.896 [2024-11-15 10:40:12.347407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65448 len:8 PRP1 0x0 PRP2 0x0 00:17:01.896 [2024-11-15 10:40:12.347421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.896 [2024-11-15 10:40:12.347435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.896 [2024-11-15 10:40:12.347446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.896 [2024-11-15 10:40:12.347458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65456 len:8 PRP1 0x0 PRP2 0x0 00:17:01.896 [2024-11-15 10:40:12.347471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.896 [2024-11-15 10:40:12.347485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.896 [2024-11-15 10:40:12.347495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.896 [2024-11-15 10:40:12.347505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65464 len:8 PRP1 0x0 PRP2 0x0 00:17:01.896 [2024-11-15 10:40:12.347532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.896 [2024-11-15 10:40:12.347562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.896 [2024-11-15 10:40:12.347573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.896 [2024-11-15 10:40:12.347583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65472 len:8 PRP1 0x0 PRP2 0x0 00:17:01.896 [2024-11-15 10:40:12.347597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.896 [2024-11-15 10:40:12.347611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.896 [2024-11-15 10:40:12.347620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.896 [2024-11-15 10:40:12.347631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65480 len:8 PRP1 0x0 PRP2 0x0 00:17:01.896 [2024-11-15 10:40:12.347644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.896 [2024-11-15 10:40:12.347657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.896 [2024-11-15 10:40:12.347667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.896 [2024-11-15 10:40:12.347677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65488 len:8 PRP1 0x0 PRP2 0x0 00:17:01.896 [2024-11-15 10:40:12.347690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.896 [2024-11-15 10:40:12.347704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.896 [2024-11-15 10:40:12.347714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.896 [2024-11-15 10:40:12.347724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65496 len:8 PRP1 0x0 PRP2 0x0 00:17:01.896 [2024-11-15 10:40:12.347738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.896 [2024-11-15 10:40:12.347751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.896 [2024-11-15 10:40:12.347761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.896 [2024-11-15 10:40:12.347771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65504 len:8 PRP1 0x0 PRP2 0x0 00:17:01.896 [2024-11-15 10:40:12.347784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.896 [2024-11-15 10:40:12.347798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.896 [2024-11-15 10:40:12.347808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.896 [2024-11-15 10:40:12.347828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65512 len:8 PRP1 0x0 PRP2 0x0 00:17:01.896 [2024-11-15 10:40:12.347841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:17:01.896 [2024-11-15 10:40:12.347855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.896 [2024-11-15 10:40:12.347865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.896 [2024-11-15 10:40:12.347876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65520 len:8 PRP1 0x0 PRP2 0x0 00:17:01.896 [2024-11-15 10:40:12.347889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.896 [2024-11-15 10:40:12.347903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.896 [2024-11-15 10:40:12.347913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.896 [2024-11-15 10:40:12.347923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65528 len:8 PRP1 0x0 PRP2 0x0 00:17:01.896 [2024-11-15 10:40:12.347944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.896 [2024-11-15 10:40:12.347958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.896 [2024-11-15 10:40:12.347969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.896 [2024-11-15 10:40:12.347980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65536 len:8 PRP1 0x0 PRP2 0x0 00:17:01.896 [2024-11-15 10:40:12.347993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.896 [2024-11-15 10:40:12.348007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.896 [2024-11-15 10:40:12.348017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.896 [2024-11-15 10:40:12.348027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65544 len:8 PRP1 0x0 PRP2 0x0 00:17:01.896 [2024-11-15 10:40:12.348040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.896 [2024-11-15 10:40:12.348054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.896 [2024-11-15 10:40:12.348063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.896 [2024-11-15 10:40:12.348074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65552 len:8 PRP1 0x0 PRP2 0x0 00:17:01.896 [2024-11-15 10:40:12.348087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.896 [2024-11-15 10:40:12.348100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.896 [2024-11-15 10:40:12.348110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.896 [2024-11-15 10:40:12.348120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65560 len:8 PRP1 0x0 PRP2 0x0 00:17:01.897 [2024-11-15 10:40:12.348133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.348147] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.897 [2024-11-15 10:40:12.348156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.897 [2024-11-15 10:40:12.348167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65568 len:8 PRP1 0x0 PRP2 0x0 00:17:01.897 [2024-11-15 10:40:12.348180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.348193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.897 [2024-11-15 10:40:12.348203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.897 [2024-11-15 10:40:12.348213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65576 len:8 PRP1 0x0 PRP2 0x0 00:17:01.897 [2024-11-15 10:40:12.348227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.348241] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.897 [2024-11-15 10:40:12.348250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.897 [2024-11-15 10:40:12.348261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65584 len:8 PRP1 0x0 PRP2 0x0 00:17:01.897 [2024-11-15 10:40:12.348274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.348288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.897 [2024-11-15 10:40:12.348305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.897 [2024-11-15 10:40:12.348316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65592 len:8 PRP1 0x0 PRP2 0x0 00:17:01.897 [2024-11-15 10:40:12.348330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.348343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.897 [2024-11-15 10:40:12.348353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.897 [2024-11-15 10:40:12.348364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65600 len:8 PRP1 0x0 PRP2 0x0 00:17:01.897 [2024-11-15 10:40:12.348377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.348391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.897 [2024-11-15 10:40:12.348400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.897 [2024-11-15 10:40:12.348411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65608 len:8 PRP1 0x0 PRP2 0x0 00:17:01.897 [2024-11-15 10:40:12.348424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.348438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:17:01.897 [2024-11-15 10:40:12.348447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.897 [2024-11-15 10:40:12.348458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65616 len:8 PRP1 0x0 PRP2 0x0 00:17:01.897 [2024-11-15 10:40:12.348471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.348484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.897 [2024-11-15 10:40:12.348494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.897 [2024-11-15 10:40:12.348505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65624 len:8 PRP1 0x0 PRP2 0x0 00:17:01.897 [2024-11-15 10:40:12.348543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.348559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.897 [2024-11-15 10:40:12.348569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.897 [2024-11-15 10:40:12.348580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65632 len:8 PRP1 0x0 PRP2 0x0 00:17:01.897 [2024-11-15 10:40:12.348593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.348607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.897 [2024-11-15 10:40:12.348617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.897 [2024-11-15 10:40:12.348628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65640 len:8 PRP1 0x0 PRP2 0x0 00:17:01.897 [2024-11-15 10:40:12.348643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.348657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.897 [2024-11-15 10:40:12.348666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.897 [2024-11-15 10:40:12.348677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65648 len:8 PRP1 0x0 PRP2 0x0 00:17:01.897 [2024-11-15 10:40:12.348696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.348719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.897 [2024-11-15 10:40:12.348730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.897 [2024-11-15 10:40:12.348741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65656 len:8 PRP1 0x0 PRP2 0x0 00:17:01.897 [2024-11-15 10:40:12.348754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.348768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.897 [2024-11-15 
10:40:12.348778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.897 [2024-11-15 10:40:12.348788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65664 len:8 PRP1 0x0 PRP2 0x0 00:17:01.897 [2024-11-15 10:40:12.348801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.348815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.897 [2024-11-15 10:40:12.348825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.897 [2024-11-15 10:40:12.348835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65672 len:8 PRP1 0x0 PRP2 0x0 00:17:01.897 [2024-11-15 10:40:12.348849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.348863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.897 [2024-11-15 10:40:12.348873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.897 [2024-11-15 10:40:12.348883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64680 len:8 PRP1 0x0 PRP2 0x0 00:17:01.897 [2024-11-15 10:40:12.348896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.348910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.897 [2024-11-15 10:40:12.348919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.897 [2024-11-15 10:40:12.348930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64688 len:8 PRP1 0x0 PRP2 0x0 00:17:01.897 [2024-11-15 10:40:12.348943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.348957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.897 [2024-11-15 10:40:12.348966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.897 [2024-11-15 10:40:12.348977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64696 len:8 PRP1 0x0 PRP2 0x0 00:17:01.897 [2024-11-15 10:40:12.348990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.349004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.897 [2024-11-15 10:40:12.349013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.897 [2024-11-15 10:40:12.349024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64704 len:8 PRP1 0x0 PRP2 0x0 00:17:01.897 [2024-11-15 10:40:12.349037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.349052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.897 [2024-11-15 10:40:12.349061] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.897 [2024-11-15 10:40:12.349072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64712 len:8 PRP1 0x0 PRP2 0x0 00:17:01.897 [2024-11-15 10:40:12.349103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.349118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.897 [2024-11-15 10:40:12.349128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.897 [2024-11-15 10:40:12.349138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64720 len:8 PRP1 0x0 PRP2 0x0 00:17:01.897 [2024-11-15 10:40:12.349152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.349166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.897 [2024-11-15 10:40:12.349176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.897 [2024-11-15 10:40:12.349186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64728 len:8 PRP1 0x0 PRP2 0x0 00:17:01.897 [2024-11-15 10:40:12.349200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.349266] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:17:01.897 [2024-11-15 10:40:12.349331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.897 [2024-11-15 10:40:12.349353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.349370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.897 [2024-11-15 10:40:12.349384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.897 [2024-11-15 10:40:12.349399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.897 [2024-11-15 10:40:12.349412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:12.349426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.898 [2024-11-15 10:40:12.349440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:12.349454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:17:01.898 [2024-11-15 10:40:12.349501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2115710 (9): Bad file descriptor 00:17:01.898 [2024-11-15 10:40:12.354907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:01.898 [2024-11-15 10:40:12.378158] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:17:01.898 7648.50 IOPS, 29.88 MiB/s [2024-11-15T10:40:27.396Z] 8149.67 IOPS, 31.83 MiB/s [2024-11-15T10:40:27.396Z] 8374.25 IOPS, 32.71 MiB/s [2024-11-15T10:40:27.396Z] [2024-11-15 10:40:16.032392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.898 [2024-11-15 10:40:16.032451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.032481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.898 [2024-11-15 10:40:16.032498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.032527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.898 [2024-11-15 10:40:16.032568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.032586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.898 [2024-11-15 10:40:16.032601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.032616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.898 [2024-11-15 10:40:16.032631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.032647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.898 [2024-11-15 10:40:16.032661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.032676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.898 [2024-11-15 10:40:16.032690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.032706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.898 [2024-11-15 10:40:16.032720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.032736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 
nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.898 [2024-11-15 10:40:16.032750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.032765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.898 [2024-11-15 10:40:16.032779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.032795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.898 [2024-11-15 10:40:16.032809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.032825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.898 [2024-11-15 10:40:16.032839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.032854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.898 [2024-11-15 10:40:16.032868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.032884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.898 [2024-11-15 10:40:16.032898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.032914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.898 [2024-11-15 10:40:16.032928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.032979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.898 [2024-11-15 10:40:16.032995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.033011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.898 [2024-11-15 10:40:16.033026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.033041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.898 [2024-11-15 10:40:16.033058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.033075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:76104 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:01.898 [2024-11-15 10:40:16.033089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.033106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.898 [2024-11-15 10:40:16.033120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.033135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.898 [2024-11-15 10:40:16.033150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.033166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.898 [2024-11-15 10:40:16.033180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.033196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.898 [2024-11-15 10:40:16.033210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.033227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.898 [2024-11-15 10:40:16.033241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.033256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.898 [2024-11-15 10:40:16.033270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.033286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.898 [2024-11-15 10:40:16.033300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.033316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.898 [2024-11-15 10:40:16.033330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.033347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.898 [2024-11-15 10:40:16.033368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.033385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.898 [2024-11-15 
10:40:16.033399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.033416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.898 [2024-11-15 10:40:16.033430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.033445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:76200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.898 [2024-11-15 10:40:16.033459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.033475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.898 [2024-11-15 10:40:16.033489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.033505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.898 [2024-11-15 10:40:16.033532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.033549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.898 [2024-11-15 10:40:16.033565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.033581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.898 [2024-11-15 10:40:16.033596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.033621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.898 [2024-11-15 10:40:16.033637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.898 [2024-11-15 10:40:16.033654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.898 [2024-11-15 10:40:16.033668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.033683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.033698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.033714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.033728] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.033744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.033758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.033781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.033797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.033813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.033828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.033843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.033858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.033873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.033887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.033911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.033925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.033941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.033956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.033971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.033986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.034015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.899 [2024-11-15 10:40:16.034045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.899 [2024-11-15 10:40:16.034076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.899 [2024-11-15 10:40:16.034107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.899 [2024-11-15 10:40:16.034137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.899 [2024-11-15 10:40:16.034167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.899 [2024-11-15 10:40:16.034204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.899 [2024-11-15 10:40:16.034234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.899 [2024-11-15 10:40:16.034265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.899 [2024-11-15 10:40:16.034294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.899 [2024-11-15 10:40:16.034325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.899 [2024-11-15 10:40:16.034354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.899 [2024-11-15 10:40:16.034384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.899 [2024-11-15 10:40:16.034414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.899 [2024-11-15 10:40:16.034444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.899 [2024-11-15 10:40:16.034474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.899 [2024-11-15 10:40:16.034505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.034549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.034587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.034620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.034650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.034680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 
10:40:16.034696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.034710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.034741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.034771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.034800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.034830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.034860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.034890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.899 [2024-11-15 10:40:16.034905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.899 [2024-11-15 10:40:16.034920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.034936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.900 [2024-11-15 10:40:16.034950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.034972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.900 [2024-11-15 10:40:16.034987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.900 [2024-11-15 10:40:16.035022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 [2024-11-15 10:40:16.035052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 [2024-11-15 10:40:16.035091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:76360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 [2024-11-15 10:40:16.035122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 [2024-11-15 10:40:16.035152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 [2024-11-15 10:40:16.035182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 [2024-11-15 10:40:16.035212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 [2024-11-15 10:40:16.035241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 [2024-11-15 10:40:16.035272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:76408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 [2024-11-15 10:40:16.035302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:64 nsid:1 lba:76416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 [2024-11-15 10:40:16.035331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 [2024-11-15 10:40:16.035368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:76432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 [2024-11-15 10:40:16.035399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 [2024-11-15 10:40:16.035430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 [2024-11-15 10:40:16.035459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 [2024-11-15 10:40:16.035489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 [2024-11-15 10:40:16.035531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.900 [2024-11-15 10:40:16.035564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.900 [2024-11-15 10:40:16.035595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.900 [2024-11-15 10:40:16.035633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76944 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.900 [2024-11-15 10:40:16.035663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.900 [2024-11-15 10:40:16.035693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.900 [2024-11-15 10:40:16.035722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.900 [2024-11-15 10:40:16.035752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.900 [2024-11-15 10:40:16.035789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 [2024-11-15 10:40:16.035820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 [2024-11-15 10:40:16.035850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 [2024-11-15 10:40:16.035880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 [2024-11-15 10:40:16.035910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 [2024-11-15 10:40:16.035940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 
[2024-11-15 10:40:16.035970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.035986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.900 [2024-11-15 10:40:16.036000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.036015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ef930 is same with the state(6) to be set 00:17:01.900 [2024-11-15 10:40:16.036032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.900 [2024-11-15 10:40:16.036042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.900 [2024-11-15 10:40:16.036053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76528 len:8 PRP1 0x0 PRP2 0x0 00:17:01.900 [2024-11-15 10:40:16.036067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.036083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.900 [2024-11-15 10:40:16.036099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.900 [2024-11-15 10:40:16.036110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76984 len:8 PRP1 0x0 PRP2 0x0 00:17:01.900 [2024-11-15 10:40:16.036124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.036138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.900 [2024-11-15 10:40:16.036148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.900 [2024-11-15 10:40:16.036159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76992 len:8 PRP1 0x0 PRP2 0x0 00:17:01.900 [2024-11-15 10:40:16.036195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.900 [2024-11-15 10:40:16.036211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.901 [2024-11-15 10:40:16.036222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.901 [2024-11-15 10:40:16.036233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77000 len:8 PRP1 0x0 PRP2 0x0 00:17:01.901 [2024-11-15 10:40:16.036246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.901 [2024-11-15 10:40:16.036260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.901 [2024-11-15 10:40:16.036271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.901 [2024-11-15 10:40:16.036281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77008 len:8 PRP1 0x0 PRP2 0x0 00:17:01.901 [2024-11-15 10:40:16.036295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.901 [2024-11-15 10:40:16.036309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.901 [2024-11-15 10:40:16.036319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.901 [2024-11-15 10:40:16.036330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77016 len:8 PRP1 0x0 PRP2 0x0 00:17:01.901 [2024-11-15 10:40:16.036343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.901 [2024-11-15 10:40:16.036357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.901 [2024-11-15 10:40:16.036367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.901 [2024-11-15 10:40:16.036378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77024 len:8 PRP1 0x0 PRP2 0x0 00:17:01.901 [2024-11-15 10:40:16.036392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.901 [2024-11-15 10:40:16.036406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.901 [2024-11-15 10:40:16.036416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.901 [2024-11-15 10:40:16.036426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77032 len:8 PRP1 0x0 PRP2 0x0 00:17:01.901 [2024-11-15 10:40:16.036440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.901 [2024-11-15 10:40:16.036454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.901 [2024-11-15 10:40:16.036464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.901 [2024-11-15 10:40:16.036475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77040 len:8 PRP1 0x0 PRP2 0x0 00:17:01.901 [2024-11-15 10:40:16.036489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.901 [2024-11-15 10:40:16.036503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.901 [2024-11-15 10:40:16.036530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.901 [2024-11-15 10:40:16.036543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77048 len:8 PRP1 0x0 PRP2 0x0 00:17:01.901 [2024-11-15 10:40:16.036557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.901 [2024-11-15 10:40:16.036579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.901 [2024-11-15 10:40:16.036589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.901 [2024-11-15 10:40:16.036607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77056 len:8 PRP1 0x0 PRP2 0x0 00:17:01.901 [2024-11-15 10:40:16.036622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.901 
[2024-11-15 10:40:16.036636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.901 [2024-11-15 10:40:16.036647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.901 [2024-11-15 10:40:16.036657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77064 len:8 PRP1 0x0 PRP2 0x0 00:17:01.901 [2024-11-15 10:40:16.036671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.901 [2024-11-15 10:40:16.036685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.901 [2024-11-15 10:40:16.036694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.901 [2024-11-15 10:40:16.036705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77072 len:8 PRP1 0x0 PRP2 0x0 00:17:01.901 [2024-11-15 10:40:16.036719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.901 [2024-11-15 10:40:16.036733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.901 [2024-11-15 10:40:16.036743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.901 [2024-11-15 10:40:16.036754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77080 len:8 PRP1 0x0 PRP2 0x0 00:17:01.901 [2024-11-15 10:40:16.036767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.901 [2024-11-15 10:40:16.036781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.901 [2024-11-15 10:40:16.036792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.901 [2024-11-15 10:40:16.036802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77088 len:8 PRP1 0x0 PRP2 0x0 00:17:01.901 [2024-11-15 10:40:16.036816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.901 [2024-11-15 10:40:16.036830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.901 [2024-11-15 10:40:16.036840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.901 [2024-11-15 10:40:16.036851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77096 len:8 PRP1 0x0 PRP2 0x0 00:17:01.901 [2024-11-15 10:40:16.036864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.901 [2024-11-15 10:40:16.036878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.901 [2024-11-15 10:40:16.036888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.901 [2024-11-15 10:40:16.036898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77104 len:8 PRP1 0x0 PRP2 0x0 00:17:01.901 [2024-11-15 10:40:16.036913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.901 [2024-11-15 10:40:16.036973] 
bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:17:01.901 [2024-11-15 10:40:16.037037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.901 [2024-11-15 10:40:16.037059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.901 [2024-11-15 10:40:16.037074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.901 [2024-11-15 10:40:16.037099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.901 [2024-11-15 10:40:16.037115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.901 [2024-11-15 10:40:16.037128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.901 [2024-11-15 10:40:16.037143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.901 [2024-11-15 10:40:16.037157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.901 [2024-11-15 10:40:16.037171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:17:01.901 [2024-11-15 10:40:16.037220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2115710 (9): Bad file descriptor 00:17:01.901 [2024-11-15 10:40:16.041073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:17:01.901 [2024-11-15 10:40:16.070988] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:17:01.901 8449.00 IOPS, 33.00 MiB/s [2024-11-15T10:40:27.399Z] 8585.17 IOPS, 33.54 MiB/s [2024-11-15T10:40:27.399Z] 8682.14 IOPS, 33.91 MiB/s [2024-11-15T10:40:27.399Z] 8737.88 IOPS, 34.13 MiB/s [2024-11-15T10:40:27.399Z] 8783.00 IOPS, 34.31 MiB/s [2024-11-15T10:40:27.399Z] [2024-11-15 10:40:20.686728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.901 [2024-11-15 10:40:20.686786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.901 [2024-11-15 10:40:20.686813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:31520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.901 [2024-11-15 10:40:20.686829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.901 [2024-11-15 10:40:20.686846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:31912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.901 [2024-11-15 10:40:20.686861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.901 [2024-11-15 10:40:20.686877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:31920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.901 [2024-11-15 10:40:20.686892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.901 [2024-11-15 10:40:20.686908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:31928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.686922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.686938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:31936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.686952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.686968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:31944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.686982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.686998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:31952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.687012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.687068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 
lba:31968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.687098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:31976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.687128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:31984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.687157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:31992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.687186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.687216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:32008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.687245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.687274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.687304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:32032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.687336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.902 [2024-11-15 10:40:20.687366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:31536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:01.902 [2024-11-15 10:40:20.687395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:31544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.902 [2024-11-15 10:40:20.687433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:31552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.902 [2024-11-15 10:40:20.687465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:31560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.902 [2024-11-15 10:40:20.687494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:31568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.902 [2024-11-15 10:40:20.687541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:31576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.902 [2024-11-15 10:40:20.687572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:31584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.902 [2024-11-15 10:40:20.687602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:32040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.687632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:32048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.687662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:32056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.687693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.687722] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:32072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.687752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:32080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.687781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.687811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.687852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:31592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.902 [2024-11-15 10:40:20.687882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.902 [2024-11-15 10:40:20.687912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.902 [2024-11-15 10:40:20.687942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.902 [2024-11-15 10:40:20.687972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.687988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.902 [2024-11-15 10:40:20.688002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.688018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.902 [2024-11-15 10:40:20.688032] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.688048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.902 [2024-11-15 10:40:20.688062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.688078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.902 [2024-11-15 10:40:20.688092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.688108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.688122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.688138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.902 [2024-11-15 10:40:20.688152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.902 [2024-11-15 10:40:20.688168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:32120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.688182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:32128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.688218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.688250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:32144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.688279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.688309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:32160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.688341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.688371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.688401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.688430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:32192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.688460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.688490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:32208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.688531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:32216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.688562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.688592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:32232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.688632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:32240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.688663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 
[2024-11-15 10:40:20.688680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:32248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.688694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:32256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.688723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:32264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.688753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.688783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.688813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.688844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:31656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.903 [2024-11-15 10:40:20.688875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:31664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.903 [2024-11-15 10:40:20.688913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.903 [2024-11-15 10:40:20.688943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.903 [2024-11-15 10:40:20.688973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.688988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.903 [2024-11-15 10:40:20.689003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.689025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:31696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.903 [2024-11-15 10:40:20.689040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.689055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.903 [2024-11-15 10:40:20.689069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.689085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.903 [2024-11-15 10:40:20.689099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.689115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:31720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.903 [2024-11-15 10:40:20.689128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.689144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:31728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.903 [2024-11-15 10:40:20.689158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.689174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.903 [2024-11-15 10:40:20.689188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.689204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.903 [2024-11-15 10:40:20.689217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.689233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:31752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.903 [2024-11-15 10:40:20.689247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.689263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.903 [2024-11-15 10:40:20.689277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.689292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:107 nsid:1 lba:31768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.903 [2024-11-15 10:40:20.689306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.689323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:31776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.903 [2024-11-15 10:40:20.689338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.689354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.689368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.689383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:32304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.903 [2024-11-15 10:40:20.689404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.903 [2024-11-15 10:40:20.689420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.904 [2024-11-15 10:40:20.689435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.689450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.904 [2024-11-15 10:40:20.689464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.689480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.904 [2024-11-15 10:40:20.689494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.689521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.904 [2024-11-15 10:40:20.689538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.689554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.904 [2024-11-15 10:40:20.689568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.689584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:32352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.904 [2024-11-15 10:40:20.689598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.689627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:32360 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.904 [2024-11-15 10:40:20.689643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.689659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.904 [2024-11-15 10:40:20.689674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.689690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.904 [2024-11-15 10:40:20.689704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.689720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:32384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.904 [2024-11-15 10:40:20.689734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.689750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.904 [2024-11-15 10:40:20.689764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.689781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:31792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.904 [2024-11-15 10:40:20.689795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.689819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.904 [2024-11-15 10:40:20.689835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.689851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:31808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.904 [2024-11-15 10:40:20.689876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.689893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:31816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.904 [2024-11-15 10:40:20.689908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.689924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:31824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.904 [2024-11-15 10:40:20.689938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.689954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.904 
[2024-11-15 10:40:20.689968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.689984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.904 [2024-11-15 10:40:20.689997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.690014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.904 [2024-11-15 10:40:20.690028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.690043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.904 [2024-11-15 10:40:20.690057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.690073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.904 [2024-11-15 10:40:20.690087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.690103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.904 [2024-11-15 10:40:20.690117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.690133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.904 [2024-11-15 10:40:20.690147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.690163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.904 [2024-11-15 10:40:20.690177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.690193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.904 [2024-11-15 10:40:20.690206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.690228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c1750 is same with the state(6) to be set 00:17:01.904 [2024-11-15 10:40:20.690246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.904 [2024-11-15 10:40:20.690257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.904 [2024-11-15 10:40:20.690268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31904 len:8 PRP1 0x0 PRP2 0x0 
00:17:01.904 [2024-11-15 10:40:20.690282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.690297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.904 [2024-11-15 10:40:20.690307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.904 [2024-11-15 10:40:20.690318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32392 len:8 PRP1 0x0 PRP2 0x0 00:17:01.904 [2024-11-15 10:40:20.690332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.690352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.904 [2024-11-15 10:40:20.690363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.904 [2024-11-15 10:40:20.690373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32400 len:8 PRP1 0x0 PRP2 0x0 00:17:01.904 [2024-11-15 10:40:20.690387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.690401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.904 [2024-11-15 10:40:20.690412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.904 [2024-11-15 10:40:20.690422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32408 len:8 PRP1 0x0 PRP2 0x0 00:17:01.904 [2024-11-15 10:40:20.690436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.690450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.904 [2024-11-15 10:40:20.690460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.904 [2024-11-15 10:40:20.690471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32416 len:8 PRP1 0x0 PRP2 0x0 00:17:01.904 [2024-11-15 10:40:20.690484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.690498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.904 [2024-11-15 10:40:20.690526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.904 [2024-11-15 10:40:20.690538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32424 len:8 PRP1 0x0 PRP2 0x0 00:17:01.904 [2024-11-15 10:40:20.690552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.690566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.904 [2024-11-15 10:40:20.690576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.904 [2024-11-15 10:40:20.690587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32432 len:8 PRP1 0x0 PRP2 0x0 00:17:01.904 [2024-11-15 10:40:20.690601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.690615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.904 [2024-11-15 10:40:20.690632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.904 [2024-11-15 10:40:20.690644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32440 len:8 PRP1 0x0 PRP2 0x0 00:17:01.904 [2024-11-15 10:40:20.690658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.904 [2024-11-15 10:40:20.690672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.904 [2024-11-15 10:40:20.690682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.904 [2024-11-15 10:40:20.690693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32448 len:8 PRP1 0x0 PRP2 0x0 00:17:01.904 [2024-11-15 10:40:20.690706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.905 [2024-11-15 10:40:20.690720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.905 [2024-11-15 10:40:20.690730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.905 [2024-11-15 10:40:20.690741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32456 len:8 PRP1 0x0 PRP2 0x0 00:17:01.905 [2024-11-15 10:40:20.690755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.905 [2024-11-15 10:40:20.690774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.905 [2024-11-15 10:40:20.690785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.905 [2024-11-15 10:40:20.690796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32464 len:8 PRP1 0x0 PRP2 0x0 00:17:01.905 [2024-11-15 10:40:20.690810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.905 [2024-11-15 10:40:20.690824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.905 [2024-11-15 10:40:20.690834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.905 [2024-11-15 10:40:20.690845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32472 len:8 PRP1 0x0 PRP2 0x0 00:17:01.905 [2024-11-15 10:40:20.690858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.905 [2024-11-15 10:40:20.690872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.905 [2024-11-15 10:40:20.690882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.905 [2024-11-15 10:40:20.690893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32480 len:8 PRP1 0x0 PRP2 0x0 00:17:01.905 [2024-11-15 10:40:20.690906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.905 [2024-11-15 10:40:20.690920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.905 [2024-11-15 10:40:20.690930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.905 [2024-11-15 10:40:20.690941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32488 len:8 PRP1 0x0 PRP2 0x0 00:17:01.905 [2024-11-15 10:40:20.690954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.905 [2024-11-15 10:40:20.690968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.905 [2024-11-15 10:40:20.690978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.905 [2024-11-15 10:40:20.690989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32496 len:8 PRP1 0x0 PRP2 0x0 00:17:01.905 [2024-11-15 10:40:20.691002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.905 [2024-11-15 10:40:20.691022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.905 [2024-11-15 10:40:20.691033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.905 [2024-11-15 10:40:20.691043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32504 len:8 PRP1 0x0 PRP2 0x0 00:17:01.905 [2024-11-15 10:40:20.691057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.905 [2024-11-15 10:40:20.691071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.905 [2024-11-15 10:40:20.691081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.905 [2024-11-15 10:40:20.691092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32512 len:8 PRP1 0x0 PRP2 0x0 00:17:01.905 [2024-11-15 10:40:20.691105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.905 [2024-11-15 10:40:20.691119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.905 [2024-11-15 10:40:20.691129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.905 [2024-11-15 10:40:20.691145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32520 len:8 PRP1 0x0 PRP2 0x0 00:17:01.905 [2024-11-15 10:40:20.691159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.905 [2024-11-15 10:40:20.691177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:01.905 [2024-11-15 10:40:20.691188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:01.905 [2024-11-15 10:40:20.691199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32528 len:8 PRP1 0x0 PRP2 0x0 00:17:01.905 [2024-11-15 10:40:20.691212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:17:01.905 [2024-11-15 10:40:20.691274] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:17:01.905 [2024-11-15 10:40:20.691334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.905 [2024-11-15 10:40:20.691356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.905 [2024-11-15 10:40:20.691372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.905 [2024-11-15 10:40:20.691386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.905 [2024-11-15 10:40:20.691400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.905 [2024-11-15 10:40:20.691414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.905 [2024-11-15 10:40:20.691428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.905 [2024-11-15 10:40:20.691442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.905 [2024-11-15 10:40:20.691456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:17:01.905 [2024-11-15 10:40:20.695292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:17:01.905 [2024-11-15 10:40:20.695333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2115710 (9): Bad file descriptor 00:17:01.905 [2024-11-15 10:40:20.723911] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
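Both failover hops above follow the same sequence: bdev_nvme_failover_trid picks the next path (10.0.0.3:4421 to 4422 at 10:40:16, then 4422 to 4420 at 10:40:20), every queued I/O and pending ASYNC EVENT REQUEST is completed with ABORTED - SQ DELETION, the controller is marked failed, and the reset onto the new path ends with "Resetting controller successful". A minimal sketch of how a script could wait for that reconnect from outside bdevperf, using only the rpc.py path and socket shown in this log (the polling loop itself is illustrative, not a line from failover.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Poll until the NVMe0 controller is reported again after a path drop.
    until "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0; do
        sleep 1
    done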
00:17:01.905 8778.00 IOPS, 34.29 MiB/s [2024-11-15T10:40:27.403Z] 8808.36 IOPS, 34.41 MiB/s [2024-11-15T10:40:27.403Z] 8830.33 IOPS, 34.49 MiB/s [2024-11-15T10:40:27.403Z] 8855.38 IOPS, 34.59 MiB/s [2024-11-15T10:40:27.403Z] 8878.00 IOPS, 34.68 MiB/s [2024-11-15T10:40:27.403Z] 8897.27 IOPS, 34.75 MiB/s
00:17:01.905 Latency(us)
00:17:01.905 [2024-11-15T10:40:27.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:01.905 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:01.905 Verification LBA range: start 0x0 length 0x4000
00:17:01.905 NVMe0n1 : 15.01 8897.20 34.75 221.54 0.00 14003.60 659.08 29908.25
00:17:01.905 [2024-11-15T10:40:27.403Z] ===================================================================================================================
00:17:01.905 [2024-11-15T10:40:27.403Z] Total : 8897.20 34.75 221.54 0.00 14003.60 659.08 29908.25
00:17:01.905 Received shutdown signal, test time was about 15.000000 seconds
00:17:01.905
00:17:01.905 Latency(us)
00:17:01.905 [2024-11-15T10:40:27.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:01.905 [2024-11-15T10:40:27.403Z] ===================================================================================================================
00:17:01.905 [2024-11-15T10:40:27.403Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:01.905 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:17:01.905 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:17:01.905 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:17:01.905 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75679
00:17:01.905 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:17:01.905 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75679 /var/tmp/bdevperf.sock
00:17:01.905 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 75679 ']'
00:17:01.905 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:01.905 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:17:01.905 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
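The host/failover.sh@65-@67 lines above are the pass criterion for the 15-second phase: the bdevperf trace is grepped for reset notices, and the test aborts unless exactly three controller resets succeeded. A condensed sketch of that check, assuming the trace was captured to the try.txt file that is cat'ed further down in this log:

    log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$log")
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi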
00:17:01.905 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:01.905 10:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:02.196 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:02.196 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:17:02.196 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:02.454 [2024-11-15 10:40:27.850676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:02.454 10:40:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:17:02.712 [2024-11-15 10:40:28.167000] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:17:02.712 10:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:03.279 NVMe0n1 00:17:03.279 10:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:03.537 00:17:03.537 10:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:03.796 00:17:03.796 10:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:03.796 10:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:17:04.054 10:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:04.313 10:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:17:07.594 10:40:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:07.594 10:40:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:17:07.852 10:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75767 00:17:07.852 10:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75767 00:17:07.852 10:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:08.787 { 00:17:08.787 "results": [ 00:17:08.787 { 00:17:08.787 "job": "NVMe0n1", 00:17:08.787 "core_mask": "0x1", 00:17:08.787 "workload": "verify", 00:17:08.787 "status": "finished", 00:17:08.787 "verify_range": { 00:17:08.787 "start": 0, 00:17:08.787 "length": 16384 00:17:08.787 }, 00:17:08.787 "queue_depth": 128, 
00:17:08.787 "io_size": 4096, 00:17:08.787 "runtime": 1.005188, 00:17:08.787 "iops": 6897.217236974576, 00:17:08.787 "mibps": 26.942254831931937, 00:17:08.787 "io_failed": 0, 00:17:08.787 "io_timeout": 0, 00:17:08.787 "avg_latency_us": 18482.476675714304, 00:17:08.787 "min_latency_us": 2383.1272727272726, 00:17:08.787 "max_latency_us": 15371.17090909091 00:17:08.787 } 00:17:08.787 ], 00:17:08.787 "core_count": 1 00:17:08.787 } 00:17:08.787 10:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:08.787 [2024-11-15 10:40:26.504294] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:17:08.787 [2024-11-15 10:40:26.504426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75679 ] 00:17:08.787 [2024-11-15 10:40:26.658308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.787 [2024-11-15 10:40:26.727987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.787 [2024-11-15 10:40:26.785735] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:08.787 [2024-11-15 10:40:29.728797] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:17:08.787 [2024-11-15 10:40:29.728926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:08.787 [2024-11-15 10:40:29.728953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.787 [2024-11-15 10:40:29.728973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:08.787 [2024-11-15 10:40:29.728987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.787 [2024-11-15 10:40:29.729002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:08.787 [2024-11-15 10:40:29.729016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.787 [2024-11-15 10:40:29.729030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:08.788 [2024-11-15 10:40:29.729043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.788 [2024-11-15 10:40:29.729058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:17:08.788 [2024-11-15 10:40:29.729112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:17:08.788 [2024-11-15 10:40:29.729146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8d710 (9): Bad file descriptor 00:17:08.788 [2024-11-15 10:40:29.737593] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
00:17:08.788 Running I/O for 1 seconds...
00:17:08.788 6805.00 IOPS, 26.58 MiB/s
00:17:08.788 Latency(us)
00:17:08.788 [2024-11-15T10:40:34.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:08.788 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:08.788 Verification LBA range: start 0x0 length 0x4000
00:17:08.788 NVMe0n1 : 1.01 6897.22 26.94 0.00 0.00 18482.48 2383.13 15371.17
00:17:08.788 [2024-11-15T10:40:34.286Z] ===================================================================================================================
00:17:08.788 [2024-11-15T10:40:34.286Z] Total : 6897.22 26.94 0.00 0.00 18482.48 2383.13 15371.17
00:17:08.788 10:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:17:08.788 10:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:09.354 10:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:09.612 10:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:09.612 10:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:17:09.870 10:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:10.127 10:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:17:13.410 10:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:13.410 10:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:17:13.410 10:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75679 00:17:13.410 10:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 75679 ']' 00:17:13.410 10:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 75679 00:17:13.410 10:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:17:13.410 10:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:13.410 10:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75679 00:17:13.410 killing process with pid 75679 00:17:13.410 10:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:13.410 10:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:13.410 10:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75679' 00:17:13.410 10:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 75679 00:17:13.410 10:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 75679 00:17:13.668 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:17:13.668 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:14.235 rmmod nvme_tcp 00:17:14.235 rmmod nvme_fabrics 00:17:14.235 rmmod nvme_keyring 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75418 ']' 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75418 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 75418 ']' 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 75418 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75418 00:17:14.235 killing process with pid 75418 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75418' 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 75418 00:17:14.235 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 75418 00:17:14.523 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:14.523 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:14.523 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:14.523 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:17:14.523 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:14.523 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:17:14.523 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:17:14.523 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:14.523 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:14.523 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:14.523 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:14.523 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:14.523 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:14.523 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:14.523 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:14.523 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:14.523 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:14.523 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:14.523 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:14.523 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:14.523 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:14.523 10:40:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:14.781 10:40:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:14.781 10:40:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.781 10:40:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:14.781 10:40:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.781 10:40:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:17:14.781 00:17:14.781 real 0m34.586s 00:17:14.781 user 2m14.362s 00:17:14.781 sys 0m5.769s 00:17:14.781 10:40:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:14.781 10:40:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:14.781 ************************************ 00:17:14.781 END TEST nvmf_failover 00:17:14.781 ************************************ 00:17:14.781 10:40:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:14.781 10:40:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:14.781 10:40:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:14.781 10:40:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.781 ************************************ 00:17:14.781 START TEST nvmf_host_discovery 00:17:14.781 ************************************ 00:17:14.781 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:14.781 * Looking for test storage... 
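
Before the discovery output begins, note what the nvmf_failover teardown above actually did. A hedged sketch of that cleanup (device and namespace names are the ones the harness uses; the `|| true` guards are illustrative, since the real helpers run with tracing disabled and tolerate missing devices):

    # Restore the firewall minus the rules the test tagged with a comment.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Detach every veth endpoint from the bridge, then tear the topology down.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster || true
        ip link set "$dev" down || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true

Tagging every rule with an SPDK_NVMF comment at insertion time is what makes the one-line iptables-save | grep -v | iptables-restore cleanup safe: rules the test did not create pass through the filter untouched.
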
00:17:14.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:14.781 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:14.781 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:17:14.781 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:15.040 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:15.040 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:15.040 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:15.040 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:15.040 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:15.040 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:15.040 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:15.040 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:15.040 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:15.040 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:15.040 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:15.040 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:15.040 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:17:15.040 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:17:15.040 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:15.040 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:15.040 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:17:15.040 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:17:15.040 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:15.040 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:15.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.041 --rc genhtml_branch_coverage=1 00:17:15.041 --rc genhtml_function_coverage=1 00:17:15.041 --rc genhtml_legend=1 00:17:15.041 --rc geninfo_all_blocks=1 00:17:15.041 --rc geninfo_unexecuted_blocks=1 00:17:15.041 00:17:15.041 ' 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:15.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.041 --rc genhtml_branch_coverage=1 00:17:15.041 --rc genhtml_function_coverage=1 00:17:15.041 --rc genhtml_legend=1 00:17:15.041 --rc geninfo_all_blocks=1 00:17:15.041 --rc geninfo_unexecuted_blocks=1 00:17:15.041 00:17:15.041 ' 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:15.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.041 --rc genhtml_branch_coverage=1 00:17:15.041 --rc genhtml_function_coverage=1 00:17:15.041 --rc genhtml_legend=1 00:17:15.041 --rc geninfo_all_blocks=1 00:17:15.041 --rc geninfo_unexecuted_blocks=1 00:17:15.041 00:17:15.041 ' 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:15.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.041 --rc genhtml_branch_coverage=1 00:17:15.041 --rc genhtml_function_coverage=1 00:17:15.041 --rc genhtml_legend=1 00:17:15.041 --rc geninfo_all_blocks=1 00:17:15.041 --rc geninfo_unexecuted_blocks=1 00:17:15.041 00:17:15.041 ' 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:15.041 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.041 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:15.042 Cannot find device "nvmf_init_br" 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:15.042 Cannot find device "nvmf_init_br2" 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:15.042 Cannot find device "nvmf_tgt_br" 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:15.042 Cannot find device "nvmf_tgt_br2" 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:15.042 Cannot find device "nvmf_init_br" 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:15.042 Cannot find device "nvmf_init_br2" 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:15.042 Cannot find device "nvmf_tgt_br" 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:15.042 Cannot find device "nvmf_tgt_br2" 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:15.042 Cannot find device "nvmf_br" 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:15.042 Cannot find device "nvmf_init_if" 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:15.042 Cannot find device "nvmf_init_if2" 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:15.042 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:15.042 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:15.042 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:15.302 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:15.302 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:17:15.302 00:17:15.302 --- 10.0.0.3 ping statistics --- 00:17:15.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.302 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:15.302 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:15.302 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:15.302 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:17:15.302 00:17:15.302 --- 10.0.0.4 ping statistics --- 00:17:15.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.302 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:15.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:15.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:15.303 00:17:15.303 --- 10.0.0.1 ping statistics --- 00:17:15.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.303 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:15.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:15.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:17:15.303 00:17:15.303 --- 10.0.0.2 ping statistics --- 00:17:15.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.303 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=76105 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 76105 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 76105 ']' 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:15.303 10:40:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.303 [2024-11-15 10:40:40.789734] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
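
The nvmf_veth_init sequence above builds a small bridged topology and smoke-tests it with the pings before the target comes up. Condensed into a sketch (addresses and device names are exactly those in the trace; ordering and error handling are simplified):

    # Target interfaces live in their own namespace; initiator side stays on the host.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # One /24 shared by both sides: .1/.2 initiator, .3/.4 target.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the four peer ends together.
    ip link set nvmf_init_if up
    ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done

    # Reachability check in both directions, mirroring the pings above.
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The sub-0.1 ms round-trip times in the ping output are expected here: all four hops are veth pairs on one software bridge, so no packet ever leaves the host.
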
00:17:15.303 [2024-11-15 10:40:40.789832] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.562 [2024-11-15 10:40:40.941348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.562 [2024-11-15 10:40:41.009900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.562 [2024-11-15 10:40:41.009971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.562 [2024-11-15 10:40:41.009986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:15.562 [2024-11-15 10:40:41.009997] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:15.562 [2024-11-15 10:40:41.010006] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:15.562 [2024-11-15 10:40:41.010451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.820 [2024-11-15 10:40:41.068237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.455 [2024-11-15 10:40:41.878159] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.455 [2024-11-15 10:40:41.886255] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.455 10:40:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.455 null0 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.455 null1 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76137 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76137 /tmp/host.sock 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 76137 ']' 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:16.455 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:16.455 10:40:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.730 [2024-11-15 10:40:41.976949] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
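
From this point the discovery test is a two-process affair: the target (pid 76105) inside the namespace on the default RPC socket, and a second nvmf_tgt (pid 76137) acting as the NVMe-oF host on /tmp/host.sock. A hedged sketch of that layout, using only commands that appear in the trace (relative paths assume the spdk repo root, and the waitforlisten-style synchronization between steps is omitted):

    # Target: core 1, TCP transport, discovery subsystem on 10.0.0.3:8009,
    # plus two null bdevs to publish later.
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
    ./scripts/rpc.py bdev_null_create null0 1000 512
    ./scripts/rpc.py bdev_null_create null1 1000 512

    # Host: separate app on its own RPC socket, driving bdev_nvme discovery.
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    ./scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
        -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

The assertions that follow all lean on the same polling shape: a helper such as rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs, retried by waitforcondition up to ten times with a one-second sleep, so transient discovery states do not fail the test.
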
00:17:16.730 [2024-11-15 10:40:41.977051] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76137 ] 00:17:16.730 [2024-11-15 10:40:42.133634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.730 [2024-11-15 10:40:42.198433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.989 [2024-11-15 10:40:42.256250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:16.989 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:16.990 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.248 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@91 -- # get_subsystem_names 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.249 [2024-11-15 10:40:42.670573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:17:17.249 10:40:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:17.249 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:17:17.508 10:40:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:17:18.075 [2024-11-15 10:40:43.351251] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:18.075 [2024-11-15 10:40:43.351300] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:18.075 [2024-11-15 10:40:43.351325] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:18.075 [2024-11-15 10:40:43.357295] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:17:18.075 [2024-11-15 10:40:43.411733] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:17:18.075 [2024-11-15 10:40:43.412959] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1688e50:1 started. 00:17:18.075 [2024-11-15 10:40:43.414885] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:18.075 [2024-11-15 10:40:43.414914] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:18.075 [2024-11-15 10:40:43.419856] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1688e50 was disconnected and freed. delete nvme_qpair. 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.642 10:40:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:18.642 10:40:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:17:18.642 10:40:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.642 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.901 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:18.901 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:18.901 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:17:18.901 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:17:18.901 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:18.901 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:17:18.901 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:18.901 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.901 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.901 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:18.901 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:18.901 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:18.901 [2024-11-15 10:40:44.143640] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1661640:1 started. 00:17:18.901 [2024-11-15 10:40:44.150201] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1661640 was disconnected and freed. delete nvme_qpair. 
00:17:18.901 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.901 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:18.901 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:17:18.901 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:17:18.901 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:18.901 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:18.901 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.902 [2024-11-15 10:40:44.252226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:18.902 [2024-11-15 10:40:44.253257] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:17:18.902 [2024-11-15 10:40:44.253299] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:18.902 [2024-11-15 10:40:44.259249] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # return 0 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.902 [2024-11-15 10:40:44.324815] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:17:18.902 [2024-11-15 10:40:44.324990] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:18.902 [2024-11-15 10:40:44.325007] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:18.902 [2024-11-15 10:40:44.325013] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers -n nvme0 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:18.902 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.162 [2024-11-15 10:40:44.468887] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:17:19.162 [2024-11-15 10:40:44.468928] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:19.162 [2024-11-15 10:40:44.471895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.162 [2024-11-15 10:40:44.471938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.162 [2024-11-15 10:40:44.471953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.162 [2024-11-15 10:40:44.471962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.162 [2024-11-15 10:40:44.471972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.162 [2024-11-15 10:40:44.471982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.162 [2024-11-15 10:40:44.471992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.162 [2024-11-15 10:40:44.472001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.162 [2024-11-15 10:40:44.472010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1665230 is same with the state(6) to be set 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # 
eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:19.162 [2024-11-15 10:40:44.474879] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:17:19.162 [2024-11-15 10:40:44.474906] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:17:19.162 [2024-11-15 10:40:44.474977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1665230 (9): Bad file descriptor 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:17:19.162 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:19.163 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.163 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.163 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:19.163 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:19.163 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:19.163 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.163 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:19.163 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:17:19.163 10:40:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:19.163 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:19.163 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:17:19.163 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:17:19.163 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:17:19.163 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:17:19.163 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:19.163 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:19.163 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.163 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:19.163 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.163 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:19.163 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # (( max-- )) 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.420 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.421 10:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.829 [2024-11-15 10:40:45.901855] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:20.829 [2024-11-15 10:40:45.901887] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:20.829 [2024-11-15 10:40:45.901907] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:20.829 [2024-11-15 10:40:45.907889] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:17:20.829 [2024-11-15 10:40:45.966214] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:17:20.829 [2024-11-15 10:40:45.967004] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1695f80:1 started. 00:17:20.829 [2024-11-15 10:40:45.968636] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:20.829 [2024-11-15 10:40:45.968672] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:17:20.829 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.829 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:20.829 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:17:20.829 [2024-11-15 10:40:45.970816] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:20.829 695f80 was disconnected and freed. delete nvme_qpair. 
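The remaining assertions probe bdev_nvme_start_discovery's error paths through the NOT wrapper, which just inverts an expected non-zero exit status. The flag-to-JSON mapping is visible in the request dumps that follow: -b sets "name", -q sets "hostnqn", -w sets "wait_for_attach": true, and -T sets "attach_timeout_ms". A condensed view of the three cases, with the commands copied from the trace and the inline comments as interpretation:

    # Case 1: restart under the already-registered service name "nvme"
    # -> JSON-RPC error -17 "File exists".
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
        -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # Case 2: new name ("nvme_second") but the same 10.0.0.3:8009 discovery
    # endpoint -> also rejected with -17 "File exists".
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
        -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # Case 3: nothing listens on port 8010; connect() keeps failing with
    # errno 111 until the 3000 ms attach timeout expires
    # -> error -110 "Connection timed out".
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
        -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000

Between the failed starts, get_discovery_ctrlrs and get_bdev_list confirm that the original "nvme" discovery service and its nvme0n1/nvme0n2 bdevs are untouched by the rejected requests.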
00:17:20.829 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:20.829 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.829 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:20.829 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.829 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:20.829 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.829 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.829 request: 00:17:20.829 { 00:17:20.829 "name": "nvme", 00:17:20.829 "trtype": "tcp", 00:17:20.829 "traddr": "10.0.0.3", 00:17:20.829 "adrfam": "ipv4", 00:17:20.829 "trsvcid": "8009", 00:17:20.829 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:20.829 "wait_for_attach": true, 00:17:20.829 "method": "bdev_nvme_start_discovery", 00:17:20.829 "req_id": 1 00:17:20.829 } 00:17:20.829 Got JSON-RPC error response 00:17:20.829 response: 00:17:20.829 { 00:17:20.829 "code": -17, 00:17:20.829 "message": "File exists" 00:17:20.829 } 00:17:20.829 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:20.829 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:17:20.829 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:20.829 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:20.829 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:20.829 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:17:20.829 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:20.829 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:20.829 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:20.829 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.829 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.829 10:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.829 request: 00:17:20.829 { 00:17:20.829 "name": "nvme_second", 00:17:20.829 "trtype": "tcp", 00:17:20.829 "traddr": "10.0.0.3", 00:17:20.829 "adrfam": "ipv4", 00:17:20.829 "trsvcid": "8009", 00:17:20.829 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:20.829 "wait_for_attach": true, 00:17:20.829 "method": "bdev_nvme_start_discovery", 00:17:20.829 "req_id": 1 00:17:20.829 } 00:17:20.829 Got JSON-RPC error response 00:17:20.829 response: 00:17:20.829 { 00:17:20.829 "code": -17, 00:17:20.829 "message": "File exists" 00:17:20.829 } 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.829 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:17:20.830 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:17:20.830 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:20.830 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.830 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.830 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:20.830 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:20.830 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:20.830 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.830 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:20.830 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:20.830 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:17:20.830 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:20.830 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:20.830 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.830 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:20.830 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.830 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:20.830 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.830 10:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:21.766 [2024-11-15 10:40:47.233316] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:21.766 [2024-11-15 10:40:47.233427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1698d50 with addr=10.0.0.3, port=8010 00:17:21.766 [2024-11-15 10:40:47.233453] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:21.766 [2024-11-15 
10:40:47.233464] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:21.766 [2024-11-15 10:40:47.233473] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:17:23.140 [2024-11-15 10:40:48.233319] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:23.140 [2024-11-15 10:40:48.233419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1698d50 with addr=10.0.0.3, port=8010 00:17:23.140 [2024-11-15 10:40:48.233446] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:23.140 [2024-11-15 10:40:48.233456] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:23.140 [2024-11-15 10:40:48.233467] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:17:24.075 [2024-11-15 10:40:49.233170] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:17:24.075 request: 00:17:24.075 { 00:17:24.075 "name": "nvme_second", 00:17:24.075 "trtype": "tcp", 00:17:24.075 "traddr": "10.0.0.3", 00:17:24.075 "adrfam": "ipv4", 00:17:24.075 "trsvcid": "8010", 00:17:24.076 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:24.076 "wait_for_attach": false, 00:17:24.076 "attach_timeout_ms": 3000, 00:17:24.076 "method": "bdev_nvme_start_discovery", 00:17:24.076 "req_id": 1 00:17:24.076 } 00:17:24.076 Got JSON-RPC error response 00:17:24.076 response: 00:17:24.076 { 00:17:24.076 "code": -110, 00:17:24.076 "message": "Connection timed out" 00:17:24.076 } 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76137 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:17:24.076 10:40:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:24.076 rmmod nvme_tcp 00:17:24.076 rmmod nvme_fabrics 00:17:24.076 rmmod nvme_keyring 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 76105 ']' 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 76105 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 76105 ']' 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 76105 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76105 00:17:24.076 killing process with pid 76105 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76105' 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 76105 00:17:24.076 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 76105 00:17:24.335 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:24.335 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:24.335 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:24.335 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:17:24.335 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:17:24.335 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:17:24.335 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:24.335 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:24.335 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:24.335 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:24.335 10:40:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:24.335 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:24.335 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:24.335 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:24.335 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:24.335 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:24.335 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:24.335 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:24.594 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:24.594 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:24.594 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:24.594 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:24.594 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:24.594 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.594 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:24.594 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.594 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:17:24.594 00:17:24.594 real 0m9.864s 00:17:24.594 user 0m18.052s 00:17:24.594 sys 0m2.092s 00:17:24.594 ************************************ 00:17:24.594 END TEST nvmf_host_discovery 00:17:24.594 ************************************ 00:17:24.594 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:24.594 10:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:24.594 10:40:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:24.594 10:40:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:24.594 10:40:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:24.594 10:40:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.594 ************************************ 00:17:24.594 START TEST nvmf_host_multipath_status 00:17:24.594 ************************************ 00:17:24.594 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:24.594 * Looking for test storage... 
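Note: the two bdev_nvme_start_discovery failures in the nvmf_host_discovery run above are deliberate negative tests, not faults. The first request targets 10.0.0.3:8009, which already has a discovery service attached, so the target answers with JSON-RPC error -17 ("File exists"); the second targets port 8010, where nothing listens, and the 3000 ms attach timeout (-T 3000) turns the repeated connect failures into error -110 ("Connection timed out"). A minimal reproduction sketch, with the socket path, address, and hostnqn copied from the log and rpc.py standing for scripts/rpc.py:

  # Duplicate discovery target -> expected -17 "File exists"
  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
      -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
  # No listener on 8010 -> expected -110 "Connection timed out" after 3 s
  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
      -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000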
00:17:24.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:24.594 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:24.594 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:17:24.594 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:24.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.854 --rc genhtml_branch_coverage=1 00:17:24.854 --rc genhtml_function_coverage=1 00:17:24.854 --rc genhtml_legend=1 00:17:24.854 --rc geninfo_all_blocks=1 00:17:24.854 --rc geninfo_unexecuted_blocks=1 00:17:24.854 00:17:24.854 ' 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:24.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.854 --rc genhtml_branch_coverage=1 00:17:24.854 --rc genhtml_function_coverage=1 00:17:24.854 --rc genhtml_legend=1 00:17:24.854 --rc geninfo_all_blocks=1 00:17:24.854 --rc geninfo_unexecuted_blocks=1 00:17:24.854 00:17:24.854 ' 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:24.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.854 --rc genhtml_branch_coverage=1 00:17:24.854 --rc genhtml_function_coverage=1 00:17:24.854 --rc genhtml_legend=1 00:17:24.854 --rc geninfo_all_blocks=1 00:17:24.854 --rc geninfo_unexecuted_blocks=1 00:17:24.854 00:17:24.854 ' 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:24.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.854 --rc genhtml_branch_coverage=1 00:17:24.854 --rc genhtml_function_coverage=1 00:17:24.854 --rc genhtml_legend=1 00:17:24.854 --rc geninfo_all_blocks=1 00:17:24.854 --rc geninfo_unexecuted_blocks=1 00:17:24.854 00:17:24.854 ' 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:24.854 10:40:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:17:24.854 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:24.855 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:24.855 Cannot find device "nvmf_init_br" 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:24.855 Cannot find device "nvmf_init_br2" 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:24.855 Cannot find device "nvmf_tgt_br" 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:24.855 Cannot find device "nvmf_tgt_br2" 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:24.855 Cannot find device "nvmf_init_br" 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:24.855 Cannot find device "nvmf_init_br2" 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:17:24.855 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:24.855 Cannot find device "nvmf_tgt_br" 00:17:24.856 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:17:24.856 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:24.856 Cannot find device "nvmf_tgt_br2" 00:17:24.856 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:17:24.856 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:24.856 Cannot find device "nvmf_br" 00:17:24.856 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:17:24.856 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:17:24.856 Cannot find device "nvmf_init_if" 00:17:24.856 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:17:24.856 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:24.856 Cannot find device "nvmf_init_if2" 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:25.115 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:25.115 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:25.115 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:25.115 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:17:25.115 00:17:25.115 --- 10.0.0.3 ping statistics --- 00:17:25.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.115 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:25.115 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:25.115 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:17:25.115 00:17:25.115 --- 10.0.0.4 ping statistics --- 00:17:25.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.115 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:25.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:25.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:17:25.115 00:17:25.115 --- 10.0.0.1 ping statistics --- 00:17:25.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.115 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:25.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:25.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:17:25.115 00:17:25.115 --- 10.0.0.2 ping statistics --- 00:17:25.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.115 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:25.115 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:25.374 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:17:25.374 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:25.374 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:25.374 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:25.374 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76636 00:17:25.374 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:25.374 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76636 00:17:25.374 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 76636 ']' 00:17:25.374 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.374 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:25.374 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
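Note: the four pings above are the harness verifying the veth topology it just built: nvmf_init_if/nvmf_init_if2 (10.0.0.1/10.0.0.2) stay in the root namespace, nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3/10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, and the bridge nvmf_br joins the peer ends; the ACCEPT rules are tagged with an SPDK_NVMF comment so teardown can strip them via iptables-save | grep -v SPDK_NVMF | iptables-restore, as seen at the end of the previous test. nvmf_tgt then runs inside the namespace. A single-pair sketch of the same idea, using the names from the log but not the harness code itself:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ping -c 1 10.0.0.3   # root namespace -> target namespace, as checked above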
00:17:25.374 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:25.374 10:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:25.374 [2024-11-15 10:40:50.692264] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:17:25.374 [2024-11-15 10:40:50.692365] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.374 [2024-11-15 10:40:50.846690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:25.634 [2024-11-15 10:40:50.913992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.634 [2024-11-15 10:40:50.914062] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.634 [2024-11-15 10:40:50.914077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.634 [2024-11-15 10:40:50.914088] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.634 [2024-11-15 10:40:50.914097] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:25.634 [2024-11-15 10:40:50.915407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.634 [2024-11-15 10:40:50.915422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.634 [2024-11-15 10:40:50.975365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:25.634 10:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:25.634 10:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:17:25.634 10:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:25.634 10:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:25.634 10:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:25.634 10:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.634 10:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76636 00:17:25.634 10:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:25.893 [2024-11-15 10:40:51.384273] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.151 10:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:26.409 Malloc0 00:17:26.409 10:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:26.668 10:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:26.925 10:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:27.183 [2024-11-15 10:40:52.623246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:27.183 10:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:27.441 [2024-11-15 10:40:52.931396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:27.698 10:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76686 00:17:27.698 10:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:27.698 10:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:27.698 10:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76686 /var/tmp/bdevperf.sock 00:17:27.698 10:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 76686 ']' 00:17:27.698 10:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:27.698 10:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:27.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:27.698 10:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
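Note: at this point the target side of the multipath test is fully wired: one TCP transport, one 64 MiB malloc bdev with 512-byte blocks, and one subsystem exporting it through two listeners on the same address. On nvmf_create_subsystem, -r turns on ANA reporting (what the set_ANA_state steps below manipulate) and -m 2 caps the namespace count. Condensed into the underlying rpc.py calls, with every value copied from the log and rpc.py standing for scripts/rpc.py:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421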
00:17:27.698 10:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:27.698 10:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:28.632 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:28.632 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:17:28.632 10:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:28.890 10:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:29.147 Nvme0n1 00:17:29.147 10:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:29.721 Nvme0n1 00:17:29.721 10:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:17:29.721 10:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:31.618 10:40:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:17:31.618 10:40:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:17:31.876 10:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:32.134 10:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:17:33.067 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:17:33.067 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:33.067 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:33.067 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:33.632 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:33.632 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:33.632 10:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:33.632 10:40:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:33.632 10:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:33.632 10:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:33.632 10:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:33.632 10:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:34.197 10:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:34.197 10:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:34.197 10:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:34.197 10:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:34.197 10:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:34.197 10:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:34.197 10:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:34.197 10:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:34.761 10:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:34.761 10:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:34.761 10:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:34.761 10:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:34.761 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:34.761 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:17:34.761 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:35.019 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
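Note: every check_status round below is the same probe repeated six times: ask the bdevperf process (not the target) for its I/O paths over /var/tmp/bdevperf.sock and pick one attribute per listener port with jq. A hand-written equivalent of the port_status helper the trace keeps expanding, with the socket path and jq filter copied from the log and rpc.py standing for scripts/rpc.py:

  # usage: port_status 4420 current true
  port_status() {
      local port=$1 attr=$2 want=$3
      local got
      got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
      [[ "$got" == "$want" ]]
  }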
00:17:35.277 10:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:17:36.648 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:17:36.648 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:36.648 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.648 10:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:36.648 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:36.648 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:36.648 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.648 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:37.214 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:37.214 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:37.214 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:37.214 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:37.474 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:37.474 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:37.474 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:37.474 10:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:37.735 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:37.735 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:37.735 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:37.735 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:37.994 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:37.994 10:41:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:37.994 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:37.994 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:38.253 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:38.253 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:17:38.253 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:38.511 10:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:17:38.769 10:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:17:40.140 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:17:40.140 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:40.140 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:40.141 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:40.141 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:40.141 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:40.141 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:40.141 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:40.764 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:40.764 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:40.764 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:40.764 10:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:40.764 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:40.764 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:40.764 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:40.764 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:41.022 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:41.022 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:41.022 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:41.022 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:41.279 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:41.279 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:41.279 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:41.279 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:41.538 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:41.538 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:17:41.538 10:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:41.796 10:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:42.053 10:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:17:42.987 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:17:42.987 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:42.987 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:42.987 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:43.552 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:43.552 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:17:43.552 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:43.552 10:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.552 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:43.552 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:43.552 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.552 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:43.809 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:43.809 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:43.809 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.809 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:44.067 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:44.067 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:44.067 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:44.067 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:44.657 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:44.657 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:44.657 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:44.657 10:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:44.914 10:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:44.914 10:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:17:44.914 10:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:45.171 10:41:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:45.430 10:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:17:46.361 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:17:46.361 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:46.361 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:46.361 10:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:46.618 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:46.618 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:46.618 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:46.618 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:46.876 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:46.876 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:46.876 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:46.876 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:47.439 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:47.440 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:47.440 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:47.440 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:47.696 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:47.697 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:47.697 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:47.697 10:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] 
| select (.transport.trsvcid=="4420").accessible' 00:17:47.956 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:47.956 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:47.956 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:47.956 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:48.226 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:48.226 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:17:48.226 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:48.502 10:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:48.759 10:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:17:49.691 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:17:49.691 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:49.691 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:49.691 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:50.257 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:50.257 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:50.257 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:50.257 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:50.515 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:50.515 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:50.515 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:50.515 10:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
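The @59/@60 pairs are the script's set_ANA_state helper: it retags the two target listeners (ports 4420 and 4421 on 10.0.0.3) with new ANA states in one shot, and the sleep 1 that follows gives the initiator time to pick up the ANA change notification and re-read the ANA log page before check_status asserts the new path view. A sketch of the helper as it appears in the trace (the NQN, address and ports are literal values from the log; $rootdir is the same checkout-path shorthand as above):

    # set_ANA_state STATE_FOR_4420 STATE_FOR_4421
    set_ANA_state() {
        "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }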
00:17:50.774 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:50.774 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:50.774 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:50.774 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:51.032 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:51.032 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:51.032 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:51.032 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:51.290 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:51.290 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:51.290 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:51.290 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:51.548 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:51.548 10:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:17:51.807 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:17:51.807 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:17:52.066 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:52.633 10:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:17:53.567 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:17:53.567 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:53.567 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
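Up to this point the bdev has been running the default active_passive policy, under which exactly one path reports current == true at a time; the @116 call above switches Nvme0n1 to active_active, and with both listeners set back to optimized (@119) the very next check (@121) expects current == true on 4420 and 4421 simultaneously, the first all-true check_status of the run. The switch is a single RPC against bdevperf's socket, straight from the trace:

    # active_active: every optimized, accessible path carries I/O and
    # reports current == true, instead of a single designated path
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active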
00:17:53.567 10:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:53.825 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:53.825 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:53.825 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:53.825 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:54.083 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:54.083 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:54.083 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:54.083 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:54.341 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:54.341 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:54.341 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:54.341 10:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:54.599 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:54.599 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:54.599 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:54.599 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:55.165 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:55.165 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:55.165 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:55.165 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:55.165 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:55.165 
10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:17:55.165 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:55.422 10:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:55.680 10:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:17:57.054 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:17:57.054 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:57.054 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.054 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:57.054 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:57.054 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:57.054 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:57.054 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.313 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:57.313 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:57.313 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:57.313 10:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.571 10:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:57.571 10:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:57.571 10:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.571 10:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:58.137 10:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:58.137 10:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:58.137 10:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:58.137 10:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:58.137 10:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:58.137 10:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:58.137 10:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:58.137 10:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:58.704 10:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:58.704 10:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:17:58.704 10:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:58.963 10:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:17:59.222 10:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:18:00.170 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:18:00.170 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:00.170 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.170 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:00.437 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:00.437 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:00.437 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.437 10:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:01.004 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:01.004 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:18:01.004 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:01.004 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:01.263 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:01.263 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:01.263 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:01.263 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:01.521 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:01.521 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:01.522 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:01.522 10:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:01.780 10:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:01.780 10:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:01.780 10:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:01.780 10:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:02.039 10:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:02.039 10:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:18:02.039 10:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:02.298 10:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:02.557 10:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:18:03.934 10:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:18:03.934 10:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:03.934 10:41:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:03.934 10:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:03.934 10:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:03.934 10:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:03.934 10:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:03.934 10:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:04.193 10:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:04.193 10:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:04.193 10:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:04.193 10:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:04.452 10:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:04.452 10:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:04.452 10:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:04.452 10:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:05.019 10:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:05.019 10:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:05.019 10:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:05.019 10:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:05.277 10:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:05.277 10:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:05.277 10:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:05.277 10:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:18:05.535 10:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:05.535 10:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76686 00:18:05.535 10:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 76686 ']' 00:18:05.535 10:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 76686 00:18:05.535 10:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:18:05.535 10:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:05.535 10:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76686 00:18:05.535 killing process with pid 76686 00:18:05.535 10:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:05.535 10:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:05.535 10:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76686' 00:18:05.535 10:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 76686 00:18:05.535 10:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 76686 00:18:05.535 { 00:18:05.535 "results": [ 00:18:05.535 { 00:18:05.535 "job": "Nvme0n1", 00:18:05.535 "core_mask": "0x4", 00:18:05.535 "workload": "verify", 00:18:05.535 "status": "terminated", 00:18:05.535 "verify_range": { 00:18:05.535 "start": 0, 00:18:05.535 "length": 16384 00:18:05.535 }, 00:18:05.535 "queue_depth": 128, 00:18:05.535 "io_size": 4096, 00:18:05.535 "runtime": 35.823517, 00:18:05.535 "iops": 8871.211612193187, 00:18:05.535 "mibps": 34.653170360129636, 00:18:05.535 "io_failed": 0, 00:18:05.535 "io_timeout": 0, 00:18:05.535 "avg_latency_us": 14396.600266675972, 00:18:05.535 "min_latency_us": 140.56727272727272, 00:18:05.535 "max_latency_us": 4026531.84 00:18:05.535 } 00:18:05.535 ], 00:18:05.535 "core_count": 1 00:18:05.535 } 00:18:05.795 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76686 00:18:05.795 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:05.795 [2024-11-15 10:40:53.006150] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:18:05.795 [2024-11-15 10:40:53.006266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76686 ] 00:18:05.795 [2024-11-15 10:40:53.150530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.795 [2024-11-15 10:40:53.211333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.795 [2024-11-15 10:40:53.264728] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:05.795 Running I/O for 90 seconds... 
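The JSON block above is bdevperf's summary for the terminated job ("terminated" because the @137 killprocess stopped it after 35.8 s, not because of an I/O error), and its figures are internally consistent: with 4 KiB I/O, throughput in MiB/s is IOPS times io_size divided by 2^20. A quick check of the reported "mibps" field:

    # MiB/s = IOPS * io_size / 2^20
    awk 'BEGIN { printf "%.6f\n", 8871.211612 * 4096 / 1048576 }'
    # prints 34.653170, matching "mibps": 34.653170360129636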
00:18:05.795 8144.00 IOPS, 31.81 MiB/s [2024-11-15T10:41:31.293Z] 8800.00 IOPS, 34.38 MiB/s [2024-11-15T10:41:31.293Z] 9002.67 IOPS, 35.17 MiB/s [2024-11-15T10:41:31.293Z] 9112.00 IOPS, 35.59 MiB/s [2024-11-15T10:41:31.293Z] 9168.00 IOPS, 35.81 MiB/s [2024-11-15T10:41:31.293Z] 9185.00 IOPS, 35.88 MiB/s [2024-11-15T10:41:31.293Z] 9205.43 IOPS, 35.96 MiB/s [2024-11-15T10:41:31.293Z] 9211.25 IOPS, 35.98 MiB/s [2024-11-15T10:41:31.293Z] 9209.56 IOPS, 35.97 MiB/s [2024-11-15T10:41:31.293Z] 9208.60 IOPS, 35.97 MiB/s [2024-11-15T10:41:31.293Z] 9226.73 IOPS, 36.04 MiB/s [2024-11-15T10:41:31.293Z] 9243.17 IOPS, 36.11 MiB/s [2024-11-15T10:41:31.293Z] 9255.23 IOPS, 36.15 MiB/s [2024-11-15T10:41:31.293Z] 9267.57 IOPS, 36.20 MiB/s [2024-11-15T10:41:31.293Z] 9275.60 IOPS, 36.23 MiB/s [2024-11-15T10:41:31.293Z] [2024-11-15 10:41:10.435953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.795 [2024-11-15 10:41:10.436030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.795 [2024-11-15 10:41:10.436090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.795 [2024-11-15 10:41:10.436111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:05.795 [2024-11-15 10:41:10.436137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.796 [2024-11-15 10:41:10.436153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.436176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.796 [2024-11-15 10:41:10.436192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.436215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.796 [2024-11-15 10:41:10.436230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.436252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.796 [2024-11-15 10:41:10.436268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.436290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.796 [2024-11-15 10:41:10.436306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.436328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.796 [2024-11-15 10:41:10.436343] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.436366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:93616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.796 [2024-11-15 10:41:10.436381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.436432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.796 [2024-11-15 10:41:10.436450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.436472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.796 [2024-11-15 10:41:10.436487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.436509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:93640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.796 [2024-11-15 10:41:10.436541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.436566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.796 [2024-11-15 10:41:10.436583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.436604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.796 [2024-11-15 10:41:10.436619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.436641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.796 [2024-11-15 10:41:10.436666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.436688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.796 [2024-11-15 10:41:10.436704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.436725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.796 [2024-11-15 10:41:10.436740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.436763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:93688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:05.796 [2024-11-15 10:41:10.436778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.436800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:93696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.796 [2024-11-15 10:41:10.436815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.436837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.796 [2024-11-15 10:41:10.436853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.436875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.796 [2024-11-15 10:41:10.436891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.436924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.796 [2024-11-15 10:41:10.436941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.436964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:93728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.796 [2024-11-15 10:41:10.436980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.437003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:93736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.796 [2024-11-15 10:41:10.437019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.437230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.796 [2024-11-15 10:41:10.437257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.437286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.796 [2024-11-15 10:41:10.437304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.437329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.796 [2024-11-15 10:41:10.437346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.437369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 
lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.796 [2024-11-15 10:41:10.437385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.437409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.796 [2024-11-15 10:41:10.437425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.437449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.796 [2024-11-15 10:41:10.437464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.437487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.796 [2024-11-15 10:41:10.437503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.437542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.796 [2024-11-15 10:41:10.437560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.437583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.796 [2024-11-15 10:41:10.437599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.437647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.796 [2024-11-15 10:41:10.437666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.437690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.796 [2024-11-15 10:41:10.437717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.437741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.796 [2024-11-15 10:41:10.437757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:05.796 [2024-11-15 10:41:10.437780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.796 [2024-11-15 10:41:10.437796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.437820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.797 [2024-11-15 10:41:10.437835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.437859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.797 [2024-11-15 10:41:10.437874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.437897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.797 [2024-11-15 10:41:10.437913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.437936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.797 [2024-11-15 10:41:10.437951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.437975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.797 [2024-11-15 10:41:10.437990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.797 [2024-11-15 10:41:10.438029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.797 [2024-11-15 10:41:10.438068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.797 [2024-11-15 10:41:10.438108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.797 [2024-11-15 10:41:10.438155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.797 [2024-11-15 10:41:10.438195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 
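The flood of NOTICE pairs here is bdevperf dumping, command by command, I/O that completed with status (03/02): Status Code Type 0x3 is the NVMe path-related group, and Status Code 0x02 within it is Asymmetric Access Inaccessible, a retryable path error rather than a data error. The 10:41:10 timestamps line up with the @108 set_ANA_state inaccessible inaccessible step earlier in the trace, so this is the burst of in-flight I/O caught by that transition. One way to gauge its size, using the try.txt dump the @141 cat is printing (a triage suggestion, not part of the test script):

    # count ANA-inaccessible completions in the saved bdevperf log
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt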
00:18:05.797 [2024-11-15 10:41:10.438219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.797 [2024-11-15 10:41:10.438234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.797 [2024-11-15 10:41:10.438274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.797 [2024-11-15 10:41:10.438312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.797 [2024-11-15 10:41:10.438351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.797 [2024-11-15 10:41:10.438390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.797 [2024-11-15 10:41:10.438429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.797 [2024-11-15 10:41:10.438477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.797 [2024-11-15 10:41:10.438528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.797 [2024-11-15 10:41:10.438570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.797 [2024-11-15 10:41:10.438609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.797 [2024-11-15 10:41:10.438655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.797 [2024-11-15 10:41:10.438695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.797 [2024-11-15 10:41:10.438734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.797 [2024-11-15 10:41:10.438773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.797 [2024-11-15 10:41:10.438812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.797 [2024-11-15 10:41:10.438851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.797 [2024-11-15 10:41:10.438890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.797 [2024-11-15 10:41:10.438935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.797 [2024-11-15 10:41:10.438974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.438998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.797 [2024-11-15 10:41:10.439013] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.439036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.797 [2024-11-15 10:41:10.439052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.439075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.797 [2024-11-15 10:41:10.439091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.439114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.797 [2024-11-15 10:41:10.439134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:05.797 [2024-11-15 10:41:10.439166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.797 [2024-11-15 10:41:10.439182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.439205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.798 [2024-11-15 10:41:10.439221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.439245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.798 [2024-11-15 10:41:10.439260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.439283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.798 [2024-11-15 10:41:10.439299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.439322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.798 [2024-11-15 10:41:10.439338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.439361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.798 [2024-11-15 10:41:10.439377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.439401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:05.798 [2024-11-15 10:41:10.439417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.439440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.798 [2024-11-15 10:41:10.439456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.439479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.798 [2024-11-15 10:41:10.439496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.439533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.798 [2024-11-15 10:41:10.439552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.439576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.798 [2024-11-15 10:41:10.439593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.439617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.798 [2024-11-15 10:41:10.439632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.439663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.798 [2024-11-15 10:41:10.439680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.439704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.798 [2024-11-15 10:41:10.439719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.439743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.798 [2024-11-15 10:41:10.439758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.439781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.798 [2024-11-15 10:41:10.439802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.439825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 
nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.798 [2024-11-15 10:41:10.439841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.439864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.798 [2024-11-15 10:41:10.439880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.439903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.798 [2024-11-15 10:41:10.439919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.439943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.798 [2024-11-15 10:41:10.439959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.439982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.798 [2024-11-15 10:41:10.439997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.440021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.798 [2024-11-15 10:41:10.440037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.440066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.798 [2024-11-15 10:41:10.440082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.440105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.798 [2024-11-15 10:41:10.440121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.440144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.798 [2024-11-15 10:41:10.440167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.440192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.798 [2024-11-15 10:41:10.440208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.440236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.798 [2024-11-15 10:41:10.440253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.440277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.798 [2024-11-15 10:41:10.440292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.440316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.798 [2024-11-15 10:41:10.440331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.440355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.798 [2024-11-15 10:41:10.440371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.440394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.798 [2024-11-15 10:41:10.440410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.440434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.798 [2024-11-15 10:41:10.440454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.440478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.798 [2024-11-15 10:41:10.440494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.440531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.798 [2024-11-15 10:41:10.440550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.440574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.798 [2024-11-15 10:41:10.440590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:05.798 [2024-11-15 10:41:10.440614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.798 [2024-11-15 10:41:10.440630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002b p:0 m:0 dnr:0 
00:18:05.799 [2024-11-15 10:41:10.440653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.799 [2024-11-15 10:41:10.440683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:10.440708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.799 [2024-11-15 10:41:10.440724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:10.440752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.799 [2024-11-15 10:41:10.440768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:10.440792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.799 [2024-11-15 10:41:10.440808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:10.440832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.799 [2024-11-15 10:41:10.440847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:10.440871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.799 [2024-11-15 10:41:10.440887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:10.440910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.799 [2024-11-15 10:41:10.440926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:10.440949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.799 [2024-11-15 10:41:10.440965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:10.440988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.799 [2024-11-15 10:41:10.441003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:10.441027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.799 [2024-11-15 10:41:10.441042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:10.441066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.799 [2024-11-15 10:41:10.441081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:10.441104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.799 [2024-11-15 10:41:10.441125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:10.441149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.799 [2024-11-15 10:41:10.441164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:10.441195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.799 [2024-11-15 10:41:10.441222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:10.441245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.799 [2024-11-15 10:41:10.441261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:10.441284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.799 [2024-11-15 10:41:10.441300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:10.441324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.799 [2024-11-15 10:41:10.441339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:10.441363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.799 [2024-11-15 10:41:10.441378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:10.441408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.799 [2024-11-15 10:41:10.441424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:10.441447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.799 [2024-11-15 10:41:10.441463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:10.441486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.799 [2024-11-15 10:41:10.441502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:10.441538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.799 [2024-11-15 10:41:10.441556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.799 8923.38 IOPS, 34.86 MiB/s [2024-11-15T10:41:31.297Z] 8398.47 IOPS, 32.81 MiB/s [2024-11-15T10:41:31.297Z] 7931.89 IOPS, 30.98 MiB/s [2024-11-15T10:41:31.297Z] 7514.42 IOPS, 29.35 MiB/s [2024-11-15T10:41:31.297Z] 7423.05 IOPS, 29.00 MiB/s [2024-11-15T10:41:31.297Z] 7514.52 IOPS, 29.35 MiB/s [2024-11-15T10:41:31.297Z] 7597.23 IOPS, 29.68 MiB/s [2024-11-15T10:41:31.297Z] 7750.39 IOPS, 30.27 MiB/s [2024-11-15T10:41:31.297Z] 7946.42 IOPS, 31.04 MiB/s [2024-11-15T10:41:31.297Z] 8114.64 IOPS, 31.70 MiB/s [2024-11-15T10:41:31.297Z] 8270.00 IOPS, 32.30 MiB/s [2024-11-15T10:41:31.297Z] 8312.74 IOPS, 32.47 MiB/s [2024-11-15T10:41:31.297Z] 8349.82 IOPS, 32.62 MiB/s [2024-11-15T10:41:31.297Z] 8386.45 IOPS, 32.76 MiB/s [2024-11-15T10:41:31.297Z] 8470.10 IOPS, 33.09 MiB/s [2024-11-15T10:41:31.297Z] 8604.58 IOPS, 33.61 MiB/s [2024-11-15T10:41:31.297Z] 8731.69 IOPS, 34.11 MiB/s [2024-11-15T10:41:31.297Z] [2024-11-15 10:41:27.995313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:102208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.799 [2024-11-15 10:41:27.995390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:27.995448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:102224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.799 [2024-11-15 10:41:27.995500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:27.995551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.799 [2024-11-15 10:41:27.995571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:27.995594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:102256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.799 [2024-11-15 10:41:27.995610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:27.995632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:102272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.799 [2024-11-15 10:41:27.995647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 
dnr:0 00:18:05.799 [2024-11-15 10:41:27.995669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:101584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.799 [2024-11-15 10:41:27.995684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:05.799 [2024-11-15 10:41:27.995706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:101616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.799 [2024-11-15 10:41:27.995721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.995742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:101648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.800 [2024-11-15 10:41:27.995757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.995778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:102280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.800 [2024-11-15 10:41:27.995793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.995815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:101696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.800 [2024-11-15 10:41:27.995830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.995851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.800 [2024-11-15 10:41:27.995868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.995891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:101760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.800 [2024-11-15 10:41:27.995905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.995927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.800 [2024-11-15 10:41:27.995941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.995964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.800 [2024-11-15 10:41:27.995991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.996014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.800 [2024-11-15 10:41:27.996030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.996052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:101840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.800 [2024-11-15 10:41:27.996067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.996088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.800 [2024-11-15 10:41:27.996104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.996127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:101904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.800 [2024-11-15 10:41:27.996143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.996169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:102312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.800 [2024-11-15 10:41:27.996186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.996209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.800 [2024-11-15 10:41:27.996225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.996246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.800 [2024-11-15 10:41:27.996261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.996283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:102360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.800 [2024-11-15 10:41:27.996298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.996320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:102376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.800 [2024-11-15 10:41:27.996335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.996357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:102392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.800 [2024-11-15 10:41:27.996381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.996402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.800 [2024-11-15 10:41:27.996417] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.996439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:101608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.800 [2024-11-15 10:41:27.996453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.996484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.800 [2024-11-15 10:41:27.996499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.996536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.800 [2024-11-15 10:41:27.996555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.996577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:101688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.800 [2024-11-15 10:41:27.996592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.996613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:101720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.800 [2024-11-15 10:41:27.996629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.996651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.800 [2024-11-15 10:41:27.996666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.996688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.800 [2024-11-15 10:41:27.996703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.996725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.800 [2024-11-15 10:41:27.996740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.996762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.800 [2024-11-15 10:41:27.996778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:05.800 [2024-11-15 10:41:27.996799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101832 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:05.800 [2024-11-15 10:41:27.996815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:05.801 [2024-11-15 10:41:27.996837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.801 [2024-11-15 10:41:27.996852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:05.801 [2024-11-15 10:41:27.996874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.801 [2024-11-15 10:41:27.996889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:05.801 [2024-11-15 10:41:27.996911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:102432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.801 [2024-11-15 10:41:27.996926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:05.801 [2024-11-15 10:41:27.996956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:101952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.801 [2024-11-15 10:41:27.996972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:05.801 [2024-11-15 10:41:27.996994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:101984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.801 [2024-11-15 10:41:27.997009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:05.801 [2024-11-15 10:41:27.997031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.801 [2024-11-15 10:41:27.997046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:05.801 [2024-11-15 10:41:27.997068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.801 [2024-11-15 10:41:27.997083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:05.801 [2024-11-15 10:41:27.997104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.801 [2024-11-15 10:41:27.997119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:05.801 [2024-11-15 10:41:27.997141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.801 [2024-11-15 10:41:27.997156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:05.801 [2024-11-15 10:41:27.997178] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.801 [2024-11-15 10:41:27.997193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.801 [2024-11-15 10:41:27.997215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:102440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.801 [2024-11-15 10:41:27.997230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.801 [2024-11-15 10:41:27.997252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:102456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.801 [2024-11-15 10:41:27.997267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.801 [2024-11-15 10:41:27.997289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.801 [2024-11-15 10:41:27.997303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:05.801 [2024-11-15 10:41:27.997325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:102192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.801 [2024-11-15 10:41:27.997341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:05.801 [2024-11-15 10:41:27.997363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:101928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.801 [2024-11-15 10:41:27.997377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:05.801 [2024-11-15 10:41:27.997407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.801 [2024-11-15 10:41:27.997423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:05.801 [2024-11-15 10:41:27.997446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.801 [2024-11-15 10:41:27.997461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:05.801 [2024-11-15 10:41:27.997483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.801 [2024-11-15 10:41:27.997498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:05.801 [2024-11-15 10:41:27.997532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:102064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.801 [2024-11-15 10:41:27.997551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:05.801 
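The (03/02) in the notices above decodes as Status Code Type 0x3 (path-related) / Status Code 0x02 (Asymmetric Access Inaccessible): the multipath test has flipped the active path's ANA state, so every in-flight command on qid:1 completes with a path error and is retried on the other path, which is also why the IOPS readings dip to about 7423 before recovering. A rough triage sketch for a saved copy of this console log; the build.log filename is an assumption:

    # count how many completions failed over; grep -o rather than grep -c,
    # because this log can carry several records per physical line
    grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' build.log | wc -l
    # smallest and largest LBA touched during the bursts
    grep -oE 'lba:[0-9]+' build.log | sort -t: -k2 -n | sed -n '1p;$p'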
00:18:05.802 8841.03 IOPS, 34.54 MiB/s
[2024-11-15T10:41:31.300Z] 8853.24 IOPS, 34.58 MiB/s
[2024-11-15T10:41:31.300Z] 8864.97 IOPS, 34.63 MiB/s
[2024-11-15T10:41:31.300Z] Received shutdown signal, test time was about 35.824339 seconds
00:18:05.802
00:18:05.802 Latency(us)
00:18:05.802 Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average      min         max
00:18:05.802 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:05.802 Verification LBA range: start 0x0 length 0x4000
00:18:05.802 Nvme0n1            :     35.82   8871.21   34.65     0.00   0.00   14396.60   140.57   4026531.84
00:18:05.802 ===================================================================================================================
00:18:05.802 Total              :             8871.21   34.65     0.00   0.00   14396.60   140.57   4026531.84
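Two quick consistency checks on the summary table above, using only the logged numbers (plain awk arithmetic, nothing SPDK-specific):

    # MiB/s column = IOPS x IO size / 2^20; the Job line says IO size: 4096
    awk 'BEGIN { printf "%.2f\n", 8871.21 * 4096 / 1048576 }'    # -> 34.65, matches
    # Little's law: queue depth 128 over a 14396.60 us average latency implies
    awk 'BEGIN { printf "%.0f\n", 128 / (14396.60 / 1e6) }'      # -> 8891 IOPS

The Little's law estimate of ~8891 IOPS sits within 0.3% of the measured 8871.21, which is what one expects at steady state; the residual gap comes from the ramp-up and shutdown edges of the 35.82 s run.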
00:18:05.802 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:18:06.060 rmmod nvme_tcp
00:18:06.060 rmmod nvme_fabrics
00:18:06.060 rmmod nvme_keyring
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76636 ']'
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76636
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 76636 ']'
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 76636
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76636
00:18:06.060 killing process with pid 76636
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76636'
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 76636
00:18:06.060 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 76636
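The killprocess helper traced at common/autotest_common.sh@952-@976 above boils down to the following pattern. This is a paraphrased sketch, not the verbatim SPDK function; the real one also special-cases sudo-owned and non-Linux processes, which is what the uname and reactor_0 checks in the trace are for:

    killprocess() {
        local pid=$1                          # here: 76636, the nvmf target pid
        [[ -z $pid ]] && return 1             # @952: refuse an empty argument
        kill -0 "$pid" || return 0            # @956: signal 0 probes liveness only
                                              # (sketch choice; the real helper may differ)
        echo "killing process with pid $pid"  # @970
        kill "$pid"                           # @971: default SIGTERM
        wait "$pid" || true                   # @976: reap the child, ignore its status
    }

kill -0 delivers no signal at all; it only asks the kernel whether the pid exists and is signalable, which is why it is safe to run before the real kill.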
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:06.579 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:06.579 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:06.579 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:06.579 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:06.579 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.579 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:06.579 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.579 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:18:06.579 00:18:06.579 real 0m41.949s 00:18:06.579 user 2m16.722s 00:18:06.579 sys 0m11.891s 00:18:06.579 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:06.579 ************************************ 00:18:06.579 END TEST nvmf_host_multipath_status 00:18:06.579 ************************************ 00:18:06.579 10:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:06.579 10:41:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:06.579 10:41:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:06.579 10:41:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:06.579 10:41:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.579 ************************************ 00:18:06.579 START TEST nvmf_discovery_remove_ifc 00:18:06.579 ************************************ 00:18:06.579 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:06.839 * Looking for test storage... 
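A note on the harness plumbing visible at this seam: each suite is launched through run_test, which prints the START TEST / END TEST banners and the real/user/sys block seen above by timing the suite script. The sketch below is only a guess at the shape of that helper (the real autotest_common.sh version also manages xtrace state and exit-code bookkeeping, which is omitted here):

  run_test() {
      # Minimal sketch, not the actual autotest_common.sh code: print the
      # banners and time the suite so the real/user/sys lines appear in the log.
      local suite=$1; shift
      echo "************************************"
      echo "START TEST $suite"
      echo "************************************"
      time "$@"
      echo "************************************"
      echo "END TEST $suite"
      echo "************************************"
  }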
00:18:06.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:06.839 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:06.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.840 --rc genhtml_branch_coverage=1 00:18:06.840 --rc genhtml_function_coverage=1 00:18:06.840 --rc genhtml_legend=1 00:18:06.840 --rc geninfo_all_blocks=1 00:18:06.840 --rc geninfo_unexecuted_blocks=1 00:18:06.840 00:18:06.840 ' 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:06.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.840 --rc genhtml_branch_coverage=1 00:18:06.840 --rc genhtml_function_coverage=1 00:18:06.840 --rc genhtml_legend=1 00:18:06.840 --rc geninfo_all_blocks=1 00:18:06.840 --rc geninfo_unexecuted_blocks=1 00:18:06.840 00:18:06.840 ' 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:06.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.840 --rc genhtml_branch_coverage=1 00:18:06.840 --rc genhtml_function_coverage=1 00:18:06.840 --rc genhtml_legend=1 00:18:06.840 --rc geninfo_all_blocks=1 00:18:06.840 --rc geninfo_unexecuted_blocks=1 00:18:06.840 00:18:06.840 ' 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:06.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.840 --rc genhtml_branch_coverage=1 00:18:06.840 --rc genhtml_function_coverage=1 00:18:06.840 --rc genhtml_legend=1 00:18:06.840 --rc geninfo_all_blocks=1 00:18:06.840 --rc geninfo_unexecuted_blocks=1 00:18:06.840 00:18:06.840 ' 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:06.840 10:41:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:06.840 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:06.840 10:41:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:06.840 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:06.841 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:06.841 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:06.841 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:06.841 Cannot find device "nvmf_init_br" 00:18:06.841 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:18:06.841 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:06.841 Cannot find device "nvmf_init_br2" 00:18:06.841 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:18:06.841 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:06.841 Cannot find device "nvmf_tgt_br" 00:18:06.841 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:18:06.841 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:06.841 Cannot find device "nvmf_tgt_br2" 00:18:06.841 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:18:06.841 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:06.841 Cannot find device "nvmf_init_br" 00:18:06.841 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:18:06.841 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:06.841 Cannot find device "nvmf_init_br2" 00:18:06.841 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:18:06.841 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:07.099 Cannot find device "nvmf_tgt_br" 00:18:07.099 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:18:07.099 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:07.099 Cannot find device "nvmf_tgt_br2" 00:18:07.099 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:18:07.099 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:07.099 Cannot find device "nvmf_br" 00:18:07.099 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:18:07.099 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:07.099 Cannot find device "nvmf_init_if" 00:18:07.099 10:41:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:07.100 Cannot find device "nvmf_init_if2" 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:07.100 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:07.100 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:07.100 10:41:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:07.100 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:07.100 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.110 ms 00:18:07.100 00:18:07.100 --- 10.0.0.3 ping statistics --- 00:18:07.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.100 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:07.100 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:07.100 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:18:07.100 00:18:07.100 --- 10.0.0.4 ping statistics --- 00:18:07.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.100 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:07.100 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:07.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:07.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:18:07.360 00:18:07.360 --- 10.0.0.1 ping statistics --- 00:18:07.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.360 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:07.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:07.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:18:07.360 00:18:07.360 --- 10.0.0.2 ping statistics --- 00:18:07.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.360 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77552 00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77552 00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 77552 ']' 00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:07.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
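To make the preceding wall of ip/iptables traces easier to follow: nvmf_veth_init builds a self-contained sandbox in which the target runs inside the nvmf_tgt_ns_spdk namespace and reaches the initiator over veth pairs joined by a bridge, which is what the four pings just verified. A condensed sketch of one initiator/target pair, assembled from the commands traced above (the second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, is set up identically; error handling omitted):

  ip netns add nvmf_tgt_ns_spdk                              # target gets its own netns
  ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move target end into the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                            # bridge joins the two halves
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # Open the NVMe/TCP port and let bridged traffic through:
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3        # sanity check, as in the log above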
00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:07.360 10:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:07.360 [2024-11-15 10:41:32.685706] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:18:07.361 [2024-11-15 10:41:32.685785] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.361 [2024-11-15 10:41:32.839477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.620 [2024-11-15 10:41:32.907417] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.620 [2024-11-15 10:41:32.907486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.620 [2024-11-15 10:41:32.907502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.620 [2024-11-15 10:41:32.907538] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.620 [2024-11-15 10:41:32.907549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:07.620 [2024-11-15 10:41:32.908016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.620 [2024-11-15 10:41:32.966475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:07.620 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:07.620 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:18:07.620 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:07.620 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:07.620 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:07.620 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.620 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:18:07.620 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.620 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:07.620 [2024-11-15 10:41:33.093167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.620 [2024-11-15 10:41:33.101343] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:18:07.620 null0 00:18:07.879 [2024-11-15 10:41:33.133223] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:07.879 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.879 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77571 00:18:07.879 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:18:07.879 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77571 /tmp/host.sock 00:18:07.879 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 77571 ']' 00:18:07.879 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:18:07.879 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:07.879 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:07.879 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:07.879 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:07.879 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:07.879 [2024-11-15 10:41:33.216828] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:18:07.879 [2024-11-15 10:41:33.216934] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77571 ] 00:18:07.879 [2024-11-15 10:41:33.368021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.150 [2024-11-15 10:41:33.434786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.150 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:08.150 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:18:08.150 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:08.150 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:08.150 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.150 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:08.150 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.150 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:08.150 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.150 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:08.150 [2024-11-15 10:41:33.550785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:08.150 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.150 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:08.150 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.150 10:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:09.541 [2024-11-15 10:41:34.616470] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:09.542 [2024-11-15 10:41:34.616520] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:09.542 [2024-11-15 10:41:34.616547] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:09.542 [2024-11-15 10:41:34.622547] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:18:09.542 [2024-11-15 10:41:34.677012] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:18:09.542 [2024-11-15 10:41:34.678069] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xc4cfb0:1 started. 00:18:09.542 [2024-11-15 10:41:34.680029] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:09.542 [2024-11-15 10:41:34.680107] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:09.542 [2024-11-15 10:41:34.680137] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:09.542 [2024-11-15 10:41:34.680153] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:09.542 [2024-11-15 10:41:34.680178] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:09.542 [2024-11-15 10:41:34.684945] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xc4cfb0 was disconnected and freed. delete nvme_qpair. 
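The polling loop that dominates the next stretch of the log is the test's wait_for_bdev helper: once a second it asks the host app, over /tmp/host.sock, for its bdev list and compares it with the expected value ("nvme0n1" while the path is up, "" after the target interface is pulled). A reconstruction from the trace follows; the rpc_cmd wrapper resolves to scripts/rpc.py as shown elsewhere in this log, and any timeout handling the real helper may have is left out:

  get_bdev_list() {
      # List the host app's bdevs, flattened to a single sorted line.
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
          | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      # Poll until the bdev list matches the expectation, e.g. "nvme0n1" or "".
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }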
00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:09.542 10:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:10.477 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:10.477 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:10.477 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:10.477 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:10.477 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.477 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:10.477 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:10.477 10:41:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.477 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:10.477 10:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:11.408 10:41:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:11.408 10:41:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:11.408 10:41:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.408 10:41:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:11.408 10:41:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:11.408 10:41:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:11.408 10:41:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:11.666 10:41:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.666 10:41:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:11.666 10:41:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:12.601 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:12.601 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:12.601 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:12.601 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.601 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:12.601 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:12.601 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:12.601 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.601 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:12.601 10:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:13.536 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:13.536 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:13.536 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:13.536 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:13.536 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.536 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:13.536 10:41:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:13.536 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.794 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:13.794 10:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:14.727 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:14.727 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:14.727 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:14.727 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:14.727 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:14.727 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.727 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:14.727 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.727 [2024-11-15 10:41:40.107616] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:18:14.727 [2024-11-15 10:41:40.107699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.727 [2024-11-15 10:41:40.107716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.727 [2024-11-15 10:41:40.107731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.727 [2024-11-15 10:41:40.107741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.728 [2024-11-15 10:41:40.107751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.728 [2024-11-15 10:41:40.107760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.728 [2024-11-15 10:41:40.107771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.728 [2024-11-15 10:41:40.107780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.728 [2024-11-15 10:41:40.107790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.728 [2024-11-15 10:41:40.107799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.728 [2024-11-15 10:41:40.107809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc29240 is same with the state(6) to be set 00:18:14.728 [2024-11-15 10:41:40.117611] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc29240 (9): Bad file descriptor 00:18:14.728 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:14.728 10:41:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:14.728 [2024-11-15 10:41:40.127629] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:14.728 [2024-11-15 10:41:40.127658] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:14.728 [2024-11-15 10:41:40.127668] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:14.728 [2024-11-15 10:41:40.127675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:14.728 [2024-11-15 10:41:40.127716] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:15.662 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:15.662 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:15.662 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:15.662 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.662 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:15.662 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:15.662 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:15.922 [2024-11-15 10:41:41.191650] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:18:15.922 [2024-11-15 10:41:41.191783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc29240 with addr=10.0.0.3, port=4420 00:18:15.922 [2024-11-15 10:41:41.191821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc29240 is same with the state(6) to be set 00:18:15.922 [2024-11-15 10:41:41.191892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc29240 (9): Bad file descriptor 00:18:15.922 [2024-11-15 10:41:41.192817] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:18:15.922 [2024-11-15 10:41:41.192917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:15.922 [2024-11-15 10:41:41.192943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:15.922 [2024-11-15 10:41:41.192965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:15.922 [2024-11-15 10:41:41.192985] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:15.922 [2024-11-15 10:41:41.192999] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:18:15.922 [2024-11-15 10:41:41.193011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:15.922 [2024-11-15 10:41:41.193032] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:15.922 [2024-11-15 10:41:41.193045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:15.922 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.922 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:15.922 10:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:16.858 [2024-11-15 10:41:42.193122] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:16.858 [2024-11-15 10:41:42.193196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:16.858 [2024-11-15 10:41:42.193227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:16.858 [2024-11-15 10:41:42.193255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:16.858 [2024-11-15 10:41:42.193265] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:18:16.858 [2024-11-15 10:41:42.193274] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:16.858 [2024-11-15 10:41:42.193281] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:16.858 [2024-11-15 10:41:42.193286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
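The teardown sequence above (qpairs deleted, reconnect attempts failing with errno 110, "Resetting controller failed", the controller left in failed state) is the reconnect policy passed to bdev_nvme_start_discovery playing out, and the discovery-entry removal that follows is the controller-loss timeout expiring. For reference, the attach command from earlier in this test, with one-line summaries of what each knob does (the summaries are this editor's reading of the options, not text from the log):

  # Command copied from the trace earlier in this test.
  #   --reconnect-delay-sec 1       retry the TCP connection once per second
  #   --ctrlr-loss-timeout-sec 2    give up and delete the controller after ~2s offline
  #   --fast-io-fail-timeout-sec 1  fail queued I/O after 1s instead of stalling
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
      bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
      --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach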
00:18:16.858 [2024-11-15 10:41:42.193322] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:18:16.859 [2024-11-15 10:41:42.193379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.859 [2024-11-15 10:41:42.193395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.859 [2024-11-15 10:41:42.193408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.859 [2024-11-15 10:41:42.193417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.859 [2024-11-15 10:41:42.193426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.859 [2024-11-15 10:41:42.193435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.859 [2024-11-15 10:41:42.193445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.859 [2024-11-15 10:41:42.193453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.859 [2024-11-15 10:41:42.193479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.859 [2024-11-15 10:41:42.193505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.859 [2024-11-15 10:41:42.193515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:18:16.859 [2024-11-15 10:41:42.193536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb4a20 (9): Bad file descriptor 00:18:16.859 [2024-11-15 10:41:42.194209] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:18:16.859 [2024-11-15 10:41:42.194235] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:18:16.859 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:16.859 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:16.859 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.859 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:16.859 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:16.859 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:16.859 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:16.859 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.859 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:18:16.859 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:16.859 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:16.859 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:18:16.859 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:16.859 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:16.859 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:16.859 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.859 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:16.859 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:16.859 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:16.859 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.859 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:16.859 10:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:18.234 10:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:18.234 10:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:18.234 10:41:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:18.234 10:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:18.234 10:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.234 10:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:18.234 10:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:18.234 10:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.234 10:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:18.234 10:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:18.800 [2024-11-15 10:41:44.201217] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:18.800 [2024-11-15 10:41:44.201257] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:18.800 [2024-11-15 10:41:44.201278] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:18.800 [2024-11-15 10:41:44.207257] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:18:18.800 [2024-11-15 10:41:44.261602] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:18:18.800 [2024-11-15 10:41:44.262475] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xc059f0:1 started. 00:18:18.800 [2024-11-15 10:41:44.263810] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:18.800 [2024-11-15 10:41:44.263858] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:18.800 [2024-11-15 10:41:44.263884] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:18.800 [2024-11-15 10:41:44.263900] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:18:18.800 [2024-11-15 10:41:44.263910] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:18.800 [2024-11-15 10:41:44.269980] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xc059f0 was disconnected and freed. delete nvme_qpair. 
00:18:19.057 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:19.057 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:19.057 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.057 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:19.057 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:19.058 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:19.058 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:19.058 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.058 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:18:19.058 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:18:19.058 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77571 00:18:19.058 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 77571 ']' 00:18:19.058 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 77571 00:18:19.058 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:18:19.058 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:19.058 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77571 00:18:19.058 killing process with pid 77571 00:18:19.058 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:19.058 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:19.058 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77571' 00:18:19.058 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 77571 00:18:19.058 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 77571 00:18:19.314 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:18:19.314 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:19.314 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:18:19.314 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:19.314 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:18:19.314 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:19.314 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:19.314 rmmod nvme_tcp 00:18:19.314 rmmod nvme_fabrics 00:18:19.314 rmmod nvme_keyring 00:18:19.314 10:41:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:19.314 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:18:19.314 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:18:19.314 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77552 ']' 00:18:19.314 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77552 00:18:19.314 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 77552 ']' 00:18:19.314 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 77552 00:18:19.314 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:18:19.314 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:19.314 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77552 00:18:19.571 killing process with pid 77552 00:18:19.571 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:19.571 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:19.571 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77552' 00:18:19.571 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 77552 00:18:19.571 10:41:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 77552 00:18:19.571 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:19.571 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:19.571 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:19.571 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:18:19.571 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:18:19.571 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:19.571 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:18:19.571 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:19.571 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:19.571 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:19.571 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:18:19.829 00:18:19.829 real 0m13.253s 00:18:19.829 user 0m22.369s 00:18:19.829 sys 0m2.586s 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:19.829 ************************************ 00:18:19.829 END TEST nvmf_discovery_remove_ifc 00:18:19.829 ************************************ 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.829 ************************************ 00:18:19.829 START TEST nvmf_identify_kernel_target 00:18:19.829 ************************************ 00:18:19.829 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:20.089 * Looking for test storage... 
00:18:20.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:18:20.089 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:20.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.090 --rc genhtml_branch_coverage=1 00:18:20.090 --rc genhtml_function_coverage=1 00:18:20.090 --rc genhtml_legend=1 00:18:20.090 --rc geninfo_all_blocks=1 00:18:20.090 --rc geninfo_unexecuted_blocks=1 00:18:20.090 00:18:20.090 ' 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:20.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.090 --rc genhtml_branch_coverage=1 00:18:20.090 --rc genhtml_function_coverage=1 00:18:20.090 --rc genhtml_legend=1 00:18:20.090 --rc geninfo_all_blocks=1 00:18:20.090 --rc geninfo_unexecuted_blocks=1 00:18:20.090 00:18:20.090 ' 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:20.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.090 --rc genhtml_branch_coverage=1 00:18:20.090 --rc genhtml_function_coverage=1 00:18:20.090 --rc genhtml_legend=1 00:18:20.090 --rc geninfo_all_blocks=1 00:18:20.090 --rc geninfo_unexecuted_blocks=1 00:18:20.090 00:18:20.090 ' 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:20.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.090 --rc genhtml_branch_coverage=1 00:18:20.090 --rc genhtml_function_coverage=1 00:18:20.090 --rc genhtml_legend=1 00:18:20.090 --rc geninfo_all_blocks=1 00:18:20.090 --rc geninfo_unexecuted_blocks=1 00:18:20.090 00:18:20.090 ' 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:20.090 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:20.090 10:41:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:20.090 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:20.091 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:20.091 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:20.091 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:20.091 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:20.091 10:41:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:20.091 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:20.091 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:20.091 Cannot find device "nvmf_init_br" 00:18:20.091 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:18:20.091 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:20.091 Cannot find device "nvmf_init_br2" 00:18:20.091 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:18:20.091 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:20.091 Cannot find device "nvmf_tgt_br" 00:18:20.091 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:18:20.091 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:20.091 Cannot find device "nvmf_tgt_br2" 00:18:20.091 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:18:20.091 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:20.091 Cannot find device "nvmf_init_br" 00:18:20.091 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:18:20.091 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:20.091 Cannot find device "nvmf_init_br2" 00:18:20.091 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:18:20.091 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:20.349 Cannot find device "nvmf_tgt_br" 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:20.349 Cannot find device "nvmf_tgt_br2" 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:20.349 Cannot find device "nvmf_br" 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:20.349 Cannot find device "nvmf_init_if" 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:20.349 Cannot find device "nvmf_init_if2" 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:20.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:20.349 10:41:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:20.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:20.349 10:41:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:20.349 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:20.350 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:20.350 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:20.350 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:20.350 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:20.608 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:20.608 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:18:20.608 00:18:20.608 --- 10.0.0.3 ping statistics --- 00:18:20.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.608 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:20.608 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:20.608 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.080 ms 00:18:20.608 00:18:20.608 --- 10.0.0.4 ping statistics --- 00:18:20.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.608 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:20.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:20.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:20.608 00:18:20.608 --- 10.0.0.1 ping statistics --- 00:18:20.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.608 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:20.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:20.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:18:20.608 00:18:20.608 --- 10.0.0.2 ping statistics --- 00:18:20.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.608 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:20.608 10:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:20.866 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:20.866 Waiting for block devices as requested 00:18:20.866 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:21.124 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:21.124 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:21.124 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:21.124 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:18:21.124 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:18:21.124 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:21.124 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:21.124 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:18:21.125 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:18:21.125 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:21.125 No valid GPT data, bailing 00:18:21.125 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:21.125 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:21.125 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:21.125 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:18:21.125 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:21.125 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:21.125 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:18:21.125 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:18:21.125 10:41:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:21.125 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:21.125 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:18:21.125 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:18:21.125 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:21.125 No valid GPT data, bailing 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:21.383 No valid GPT data, bailing 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:21.383 No valid GPT data, bailing 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:18:21.383 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:18:21.384 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:21.384 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid=50e4d619-cecf-4dd2-989d-1336dee31d8f -a 10.0.0.1 -t tcp -s 4420 00:18:21.384 00:18:21.384 Discovery Log Number of Records 2, Generation counter 2 00:18:21.384 =====Discovery Log Entry 0====== 00:18:21.384 trtype: tcp 00:18:21.384 adrfam: ipv4 00:18:21.384 subtype: current discovery subsystem 00:18:21.384 treq: not specified, sq flow control disable supported 00:18:21.384 portid: 1 00:18:21.384 trsvcid: 4420 00:18:21.384 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:21.384 traddr: 10.0.0.1 00:18:21.384 eflags: none 00:18:21.384 sectype: none 00:18:21.384 =====Discovery Log Entry 1====== 00:18:21.384 trtype: tcp 00:18:21.384 adrfam: ipv4 00:18:21.384 subtype: nvme subsystem 00:18:21.384 treq: not 
specified, sq flow control disable supported 00:18:21.384 portid: 1 00:18:21.384 trsvcid: 4420 00:18:21.384 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:21.384 traddr: 10.0.0.1 00:18:21.384 eflags: none 00:18:21.384 sectype: none 00:18:21.384 10:41:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:18:21.384 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:21.643 ===================================================== 00:18:21.643 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:21.643 ===================================================== 00:18:21.643 Controller Capabilities/Features 00:18:21.643 ================================ 00:18:21.643 Vendor ID: 0000 00:18:21.643 Subsystem Vendor ID: 0000 00:18:21.643 Serial Number: 3c2222c562bc29e603c4 00:18:21.643 Model Number: Linux 00:18:21.643 Firmware Version: 6.8.9-20 00:18:21.643 Recommended Arb Burst: 0 00:18:21.643 IEEE OUI Identifier: 00 00 00 00:18:21.643 Multi-path I/O 00:18:21.643 May have multiple subsystem ports: No 00:18:21.643 May have multiple controllers: No 00:18:21.643 Associated with SR-IOV VF: No 00:18:21.643 Max Data Transfer Size: Unlimited 00:18:21.643 Max Number of Namespaces: 0 00:18:21.643 Max Number of I/O Queues: 1024 00:18:21.643 NVMe Specification Version (VS): 1.3 00:18:21.643 NVMe Specification Version (Identify): 1.3 00:18:21.643 Maximum Queue Entries: 1024 00:18:21.643 Contiguous Queues Required: No 00:18:21.643 Arbitration Mechanisms Supported 00:18:21.643 Weighted Round Robin: Not Supported 00:18:21.643 Vendor Specific: Not Supported 00:18:21.643 Reset Timeout: 7500 ms 00:18:21.643 Doorbell Stride: 4 bytes 00:18:21.643 NVM Subsystem Reset: Not Supported 00:18:21.643 Command Sets Supported 00:18:21.643 NVM Command Set: Supported 00:18:21.643 Boot Partition: Not Supported 00:18:21.643 Memory Page Size Minimum: 4096 bytes 00:18:21.643 Memory Page Size Maximum: 4096 bytes 00:18:21.643 Persistent Memory Region: Not Supported 00:18:21.643 Optional Asynchronous Events Supported 00:18:21.643 Namespace Attribute Notices: Not Supported 00:18:21.643 Firmware Activation Notices: Not Supported 00:18:21.643 ANA Change Notices: Not Supported 00:18:21.643 PLE Aggregate Log Change Notices: Not Supported 00:18:21.643 LBA Status Info Alert Notices: Not Supported 00:18:21.643 EGE Aggregate Log Change Notices: Not Supported 00:18:21.643 Normal NVM Subsystem Shutdown event: Not Supported 00:18:21.643 Zone Descriptor Change Notices: Not Supported 00:18:21.643 Discovery Log Change Notices: Supported 00:18:21.643 Controller Attributes 00:18:21.643 128-bit Host Identifier: Not Supported 00:18:21.643 Non-Operational Permissive Mode: Not Supported 00:18:21.643 NVM Sets: Not Supported 00:18:21.643 Read Recovery Levels: Not Supported 00:18:21.643 Endurance Groups: Not Supported 00:18:21.643 Predictable Latency Mode: Not Supported 00:18:21.643 Traffic Based Keep ALive: Not Supported 00:18:21.643 Namespace Granularity: Not Supported 00:18:21.643 SQ Associations: Not Supported 00:18:21.643 UUID List: Not Supported 00:18:21.643 Multi-Domain Subsystem: Not Supported 00:18:21.643 Fixed Capacity Management: Not Supported 00:18:21.643 Variable Capacity Management: Not Supported 00:18:21.643 Delete Endurance Group: Not Supported 00:18:21.643 Delete NVM Set: Not Supported 00:18:21.643 Extended LBA Formats Supported: Not Supported 00:18:21.643 Flexible Data 
Placement Supported: Not Supported 00:18:21.643 00:18:21.643 Controller Memory Buffer Support 00:18:21.643 ================================ 00:18:21.643 Supported: No 00:18:21.643 00:18:21.643 Persistent Memory Region Support 00:18:21.643 ================================ 00:18:21.643 Supported: No 00:18:21.643 00:18:21.643 Admin Command Set Attributes 00:18:21.643 ============================ 00:18:21.643 Security Send/Receive: Not Supported 00:18:21.643 Format NVM: Not Supported 00:18:21.643 Firmware Activate/Download: Not Supported 00:18:21.643 Namespace Management: Not Supported 00:18:21.643 Device Self-Test: Not Supported 00:18:21.643 Directives: Not Supported 00:18:21.643 NVMe-MI: Not Supported 00:18:21.643 Virtualization Management: Not Supported 00:18:21.643 Doorbell Buffer Config: Not Supported 00:18:21.643 Get LBA Status Capability: Not Supported 00:18:21.643 Command & Feature Lockdown Capability: Not Supported 00:18:21.643 Abort Command Limit: 1 00:18:21.643 Async Event Request Limit: 1 00:18:21.643 Number of Firmware Slots: N/A 00:18:21.643 Firmware Slot 1 Read-Only: N/A 00:18:21.643 Firmware Activation Without Reset: N/A 00:18:21.643 Multiple Update Detection Support: N/A 00:18:21.643 Firmware Update Granularity: No Information Provided 00:18:21.643 Per-Namespace SMART Log: No 00:18:21.643 Asymmetric Namespace Access Log Page: Not Supported 00:18:21.643 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:21.643 Command Effects Log Page: Not Supported 00:18:21.643 Get Log Page Extended Data: Supported 00:18:21.643 Telemetry Log Pages: Not Supported 00:18:21.643 Persistent Event Log Pages: Not Supported 00:18:21.643 Supported Log Pages Log Page: May Support 00:18:21.643 Commands Supported & Effects Log Page: Not Supported 00:18:21.643 Feature Identifiers & Effects Log Page:May Support 00:18:21.643 NVMe-MI Commands & Effects Log Page: May Support 00:18:21.643 Data Area 4 for Telemetry Log: Not Supported 00:18:21.643 Error Log Page Entries Supported: 1 00:18:21.643 Keep Alive: Not Supported 00:18:21.643 00:18:21.643 NVM Command Set Attributes 00:18:21.643 ========================== 00:18:21.643 Submission Queue Entry Size 00:18:21.643 Max: 1 00:18:21.643 Min: 1 00:18:21.643 Completion Queue Entry Size 00:18:21.643 Max: 1 00:18:21.643 Min: 1 00:18:21.643 Number of Namespaces: 0 00:18:21.643 Compare Command: Not Supported 00:18:21.643 Write Uncorrectable Command: Not Supported 00:18:21.643 Dataset Management Command: Not Supported 00:18:21.643 Write Zeroes Command: Not Supported 00:18:21.643 Set Features Save Field: Not Supported 00:18:21.643 Reservations: Not Supported 00:18:21.643 Timestamp: Not Supported 00:18:21.643 Copy: Not Supported 00:18:21.643 Volatile Write Cache: Not Present 00:18:21.643 Atomic Write Unit (Normal): 1 00:18:21.643 Atomic Write Unit (PFail): 1 00:18:21.643 Atomic Compare & Write Unit: 1 00:18:21.643 Fused Compare & Write: Not Supported 00:18:21.643 Scatter-Gather List 00:18:21.643 SGL Command Set: Supported 00:18:21.643 SGL Keyed: Not Supported 00:18:21.643 SGL Bit Bucket Descriptor: Not Supported 00:18:21.643 SGL Metadata Pointer: Not Supported 00:18:21.643 Oversized SGL: Not Supported 00:18:21.643 SGL Metadata Address: Not Supported 00:18:21.643 SGL Offset: Supported 00:18:21.643 Transport SGL Data Block: Not Supported 00:18:21.643 Replay Protected Memory Block: Not Supported 00:18:21.643 00:18:21.643 Firmware Slot Information 00:18:21.643 ========================= 00:18:21.643 Active slot: 0 00:18:21.643 00:18:21.643 00:18:21.643 Error Log 
00:18:21.643 ========= 00:18:21.643 00:18:21.643 Active Namespaces 00:18:21.643 ================= 00:18:21.644 Discovery Log Page 00:18:21.644 ================== 00:18:21.644 Generation Counter: 2 00:18:21.644 Number of Records: 2 00:18:21.644 Record Format: 0 00:18:21.644 00:18:21.644 Discovery Log Entry 0 00:18:21.644 ---------------------- 00:18:21.644 Transport Type: 3 (TCP) 00:18:21.644 Address Family: 1 (IPv4) 00:18:21.644 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:21.644 Entry Flags: 00:18:21.644 Duplicate Returned Information: 0 00:18:21.644 Explicit Persistent Connection Support for Discovery: 0 00:18:21.644 Transport Requirements: 00:18:21.644 Secure Channel: Not Specified 00:18:21.644 Port ID: 1 (0x0001) 00:18:21.644 Controller ID: 65535 (0xffff) 00:18:21.644 Admin Max SQ Size: 32 00:18:21.644 Transport Service Identifier: 4420 00:18:21.644 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:21.644 Transport Address: 10.0.0.1 00:18:21.644 Discovery Log Entry 1 00:18:21.644 ---------------------- 00:18:21.644 Transport Type: 3 (TCP) 00:18:21.644 Address Family: 1 (IPv4) 00:18:21.644 Subsystem Type: 2 (NVM Subsystem) 00:18:21.644 Entry Flags: 00:18:21.644 Duplicate Returned Information: 0 00:18:21.644 Explicit Persistent Connection Support for Discovery: 0 00:18:21.644 Transport Requirements: 00:18:21.644 Secure Channel: Not Specified 00:18:21.644 Port ID: 1 (0x0001) 00:18:21.644 Controller ID: 65535 (0xffff) 00:18:21.644 Admin Max SQ Size: 32 00:18:21.644 Transport Service Identifier: 4420 00:18:21.644 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:18:21.644 Transport Address: 10.0.0.1 00:18:21.644 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:18:21.903 get_feature(0x01) failed 00:18:21.903 get_feature(0x02) failed 00:18:21.903 get_feature(0x04) failed 00:18:21.903 ===================================================== 00:18:21.903 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:18:21.903 ===================================================== 00:18:21.903 Controller Capabilities/Features 00:18:21.903 ================================ 00:18:21.903 Vendor ID: 0000 00:18:21.903 Subsystem Vendor ID: 0000 00:18:21.903 Serial Number: 2a740764b7b555435f9f 00:18:21.903 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:18:21.903 Firmware Version: 6.8.9-20 00:18:21.903 Recommended Arb Burst: 6 00:18:21.903 IEEE OUI Identifier: 00 00 00 00:18:21.903 Multi-path I/O 00:18:21.903 May have multiple subsystem ports: Yes 00:18:21.903 May have multiple controllers: Yes 00:18:21.903 Associated with SR-IOV VF: No 00:18:21.903 Max Data Transfer Size: Unlimited 00:18:21.903 Max Number of Namespaces: 1024 00:18:21.903 Max Number of I/O Queues: 128 00:18:21.903 NVMe Specification Version (VS): 1.3 00:18:21.903 NVMe Specification Version (Identify): 1.3 00:18:21.903 Maximum Queue Entries: 1024 00:18:21.903 Contiguous Queues Required: No 00:18:21.903 Arbitration Mechanisms Supported 00:18:21.903 Weighted Round Robin: Not Supported 00:18:21.903 Vendor Specific: Not Supported 00:18:21.903 Reset Timeout: 7500 ms 00:18:21.903 Doorbell Stride: 4 bytes 00:18:21.903 NVM Subsystem Reset: Not Supported 00:18:21.903 Command Sets Supported 00:18:21.903 NVM Command Set: Supported 00:18:21.903 Boot Partition: Not Supported 00:18:21.903 Memory 
Page Size Minimum: 4096 bytes 00:18:21.904 Memory Page Size Maximum: 4096 bytes 00:18:21.904 Persistent Memory Region: Not Supported 00:18:21.904 Optional Asynchronous Events Supported 00:18:21.904 Namespace Attribute Notices: Supported 00:18:21.904 Firmware Activation Notices: Not Supported 00:18:21.904 ANA Change Notices: Supported 00:18:21.904 PLE Aggregate Log Change Notices: Not Supported 00:18:21.904 LBA Status Info Alert Notices: Not Supported 00:18:21.904 EGE Aggregate Log Change Notices: Not Supported 00:18:21.904 Normal NVM Subsystem Shutdown event: Not Supported 00:18:21.904 Zone Descriptor Change Notices: Not Supported 00:18:21.904 Discovery Log Change Notices: Not Supported 00:18:21.904 Controller Attributes 00:18:21.904 128-bit Host Identifier: Supported 00:18:21.904 Non-Operational Permissive Mode: Not Supported 00:18:21.904 NVM Sets: Not Supported 00:18:21.904 Read Recovery Levels: Not Supported 00:18:21.904 Endurance Groups: Not Supported 00:18:21.904 Predictable Latency Mode: Not Supported 00:18:21.904 Traffic Based Keep ALive: Supported 00:18:21.904 Namespace Granularity: Not Supported 00:18:21.904 SQ Associations: Not Supported 00:18:21.904 UUID List: Not Supported 00:18:21.904 Multi-Domain Subsystem: Not Supported 00:18:21.904 Fixed Capacity Management: Not Supported 00:18:21.904 Variable Capacity Management: Not Supported 00:18:21.904 Delete Endurance Group: Not Supported 00:18:21.904 Delete NVM Set: Not Supported 00:18:21.904 Extended LBA Formats Supported: Not Supported 00:18:21.904 Flexible Data Placement Supported: Not Supported 00:18:21.904 00:18:21.904 Controller Memory Buffer Support 00:18:21.904 ================================ 00:18:21.904 Supported: No 00:18:21.904 00:18:21.904 Persistent Memory Region Support 00:18:21.904 ================================ 00:18:21.904 Supported: No 00:18:21.904 00:18:21.904 Admin Command Set Attributes 00:18:21.904 ============================ 00:18:21.904 Security Send/Receive: Not Supported 00:18:21.904 Format NVM: Not Supported 00:18:21.904 Firmware Activate/Download: Not Supported 00:18:21.904 Namespace Management: Not Supported 00:18:21.904 Device Self-Test: Not Supported 00:18:21.904 Directives: Not Supported 00:18:21.904 NVMe-MI: Not Supported 00:18:21.904 Virtualization Management: Not Supported 00:18:21.904 Doorbell Buffer Config: Not Supported 00:18:21.904 Get LBA Status Capability: Not Supported 00:18:21.904 Command & Feature Lockdown Capability: Not Supported 00:18:21.904 Abort Command Limit: 4 00:18:21.904 Async Event Request Limit: 4 00:18:21.904 Number of Firmware Slots: N/A 00:18:21.904 Firmware Slot 1 Read-Only: N/A 00:18:21.904 Firmware Activation Without Reset: N/A 00:18:21.904 Multiple Update Detection Support: N/A 00:18:21.904 Firmware Update Granularity: No Information Provided 00:18:21.904 Per-Namespace SMART Log: Yes 00:18:21.904 Asymmetric Namespace Access Log Page: Supported 00:18:21.904 ANA Transition Time : 10 sec 00:18:21.904 00:18:21.904 Asymmetric Namespace Access Capabilities 00:18:21.904 ANA Optimized State : Supported 00:18:21.904 ANA Non-Optimized State : Supported 00:18:21.904 ANA Inaccessible State : Supported 00:18:21.904 ANA Persistent Loss State : Supported 00:18:21.904 ANA Change State : Supported 00:18:21.904 ANAGRPID is not changed : No 00:18:21.904 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:18:21.904 00:18:21.904 ANA Group Identifier Maximum : 128 00:18:21.904 Number of ANA Group Identifiers : 128 00:18:21.904 Max Number of Allowed Namespaces : 1024 00:18:21.904 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:18:21.904 Command Effects Log Page: Supported 00:18:21.904 Get Log Page Extended Data: Supported 00:18:21.904 Telemetry Log Pages: Not Supported 00:18:21.904 Persistent Event Log Pages: Not Supported 00:18:21.904 Supported Log Pages Log Page: May Support 00:18:21.904 Commands Supported & Effects Log Page: Not Supported 00:18:21.904 Feature Identifiers & Effects Log Page:May Support 00:18:21.904 NVMe-MI Commands & Effects Log Page: May Support 00:18:21.904 Data Area 4 for Telemetry Log: Not Supported 00:18:21.904 Error Log Page Entries Supported: 128 00:18:21.904 Keep Alive: Supported 00:18:21.904 Keep Alive Granularity: 1000 ms 00:18:21.904 00:18:21.904 NVM Command Set Attributes 00:18:21.904 ========================== 00:18:21.904 Submission Queue Entry Size 00:18:21.904 Max: 64 00:18:21.904 Min: 64 00:18:21.904 Completion Queue Entry Size 00:18:21.904 Max: 16 00:18:21.904 Min: 16 00:18:21.904 Number of Namespaces: 1024 00:18:21.904 Compare Command: Not Supported 00:18:21.904 Write Uncorrectable Command: Not Supported 00:18:21.904 Dataset Management Command: Supported 00:18:21.904 Write Zeroes Command: Supported 00:18:21.904 Set Features Save Field: Not Supported 00:18:21.904 Reservations: Not Supported 00:18:21.904 Timestamp: Not Supported 00:18:21.904 Copy: Not Supported 00:18:21.904 Volatile Write Cache: Present 00:18:21.904 Atomic Write Unit (Normal): 1 00:18:21.904 Atomic Write Unit (PFail): 1 00:18:21.904 Atomic Compare & Write Unit: 1 00:18:21.904 Fused Compare & Write: Not Supported 00:18:21.904 Scatter-Gather List 00:18:21.904 SGL Command Set: Supported 00:18:21.904 SGL Keyed: Not Supported 00:18:21.904 SGL Bit Bucket Descriptor: Not Supported 00:18:21.904 SGL Metadata Pointer: Not Supported 00:18:21.904 Oversized SGL: Not Supported 00:18:21.904 SGL Metadata Address: Not Supported 00:18:21.904 SGL Offset: Supported 00:18:21.904 Transport SGL Data Block: Not Supported 00:18:21.904 Replay Protected Memory Block: Not Supported 00:18:21.904 00:18:21.904 Firmware Slot Information 00:18:21.904 ========================= 00:18:21.904 Active slot: 0 00:18:21.904 00:18:21.904 Asymmetric Namespace Access 00:18:21.904 =========================== 00:18:21.904 Change Count : 0 00:18:21.904 Number of ANA Group Descriptors : 1 00:18:21.904 ANA Group Descriptor : 0 00:18:21.904 ANA Group ID : 1 00:18:21.904 Number of NSID Values : 1 00:18:21.904 Change Count : 0 00:18:21.904 ANA State : 1 00:18:21.904 Namespace Identifier : 1 00:18:21.904 00:18:21.904 Commands Supported and Effects 00:18:21.904 ============================== 00:18:21.904 Admin Commands 00:18:21.904 -------------- 00:18:21.904 Get Log Page (02h): Supported 00:18:21.904 Identify (06h): Supported 00:18:21.904 Abort (08h): Supported 00:18:21.904 Set Features (09h): Supported 00:18:21.904 Get Features (0Ah): Supported 00:18:21.904 Asynchronous Event Request (0Ch): Supported 00:18:21.904 Keep Alive (18h): Supported 00:18:21.904 I/O Commands 00:18:21.904 ------------ 00:18:21.904 Flush (00h): Supported 00:18:21.904 Write (01h): Supported LBA-Change 00:18:21.904 Read (02h): Supported 00:18:21.904 Write Zeroes (08h): Supported LBA-Change 00:18:21.904 Dataset Management (09h): Supported 00:18:21.904 00:18:21.904 Error Log 00:18:21.904 ========= 00:18:21.904 Entry: 0 00:18:21.904 Error Count: 0x3 00:18:21.904 Submission Queue Id: 0x0 00:18:21.904 Command Id: 0x5 00:18:21.904 Phase Bit: 0 00:18:21.904 Status Code: 0x2 00:18:21.904 Status Code Type: 0x0 00:18:21.904 Do Not Retry: 1 00:18:21.904 Error 
Location: 0x28 00:18:21.904 LBA: 0x0 00:18:21.904 Namespace: 0x0 00:18:21.904 Vendor Log Page: 0x0 00:18:21.904 ----------- 00:18:21.904 Entry: 1 00:18:21.904 Error Count: 0x2 00:18:21.905 Submission Queue Id: 0x0 00:18:21.905 Command Id: 0x5 00:18:21.905 Phase Bit: 0 00:18:21.905 Status Code: 0x2 00:18:21.905 Status Code Type: 0x0 00:18:21.905 Do Not Retry: 1 00:18:21.905 Error Location: 0x28 00:18:21.905 LBA: 0x0 00:18:21.905 Namespace: 0x0 00:18:21.905 Vendor Log Page: 0x0 00:18:21.905 ----------- 00:18:21.905 Entry: 2 00:18:21.905 Error Count: 0x1 00:18:21.905 Submission Queue Id: 0x0 00:18:21.905 Command Id: 0x4 00:18:21.905 Phase Bit: 0 00:18:21.905 Status Code: 0x2 00:18:21.905 Status Code Type: 0x0 00:18:21.905 Do Not Retry: 1 00:18:21.905 Error Location: 0x28 00:18:21.905 LBA: 0x0 00:18:21.905 Namespace: 0x0 00:18:21.905 Vendor Log Page: 0x0 00:18:21.905 00:18:21.905 Number of Queues 00:18:21.905 ================ 00:18:21.905 Number of I/O Submission Queues: 128 00:18:21.905 Number of I/O Completion Queues: 128 00:18:21.905 00:18:21.905 ZNS Specific Controller Data 00:18:21.905 ============================ 00:18:21.905 Zone Append Size Limit: 0 00:18:21.905 00:18:21.905 00:18:21.905 Active Namespaces 00:18:21.905 ================= 00:18:21.905 get_feature(0x05) failed 00:18:21.905 Namespace ID:1 00:18:21.905 Command Set Identifier: NVM (00h) 00:18:21.905 Deallocate: Supported 00:18:21.905 Deallocated/Unwritten Error: Not Supported 00:18:21.905 Deallocated Read Value: Unknown 00:18:21.905 Deallocate in Write Zeroes: Not Supported 00:18:21.905 Deallocated Guard Field: 0xFFFF 00:18:21.905 Flush: Supported 00:18:21.905 Reservation: Not Supported 00:18:21.905 Namespace Sharing Capabilities: Multiple Controllers 00:18:21.905 Size (in LBAs): 1310720 (5GiB) 00:18:21.905 Capacity (in LBAs): 1310720 (5GiB) 00:18:21.905 Utilization (in LBAs): 1310720 (5GiB) 00:18:21.905 UUID: d61e137b-f10b-4d62-9913-42aba06c4403 00:18:21.905 Thin Provisioning: Not Supported 00:18:21.905 Per-NS Atomic Units: Yes 00:18:21.905 Atomic Boundary Size (Normal): 0 00:18:21.905 Atomic Boundary Size (PFail): 0 00:18:21.905 Atomic Boundary Offset: 0 00:18:21.905 NGUID/EUI64 Never Reused: No 00:18:21.905 ANA group ID: 1 00:18:21.905 Namespace Write Protected: No 00:18:21.905 Number of LBA Formats: 1 00:18:21.905 Current LBA Format: LBA Format #00 00:18:21.905 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:18:21.905 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:21.905 rmmod nvme_tcp 00:18:21.905 rmmod nvme_fabrics 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:18:21.905 10:41:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:21.905 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:22.163 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:22.163 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:22.163 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:22.163 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:22.163 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:22.163 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:22.163 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:22.163 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:22.163 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.163 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:22.163 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.163 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:18:22.163 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:18:22.163 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:18:22.163 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:18:22.163 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:22.163 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:22.163 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:22.163 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:22.163 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:18:22.163 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:18:22.163 10:41:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:23.097 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:23.097 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:23.097 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:23.097 00:18:23.097 real 0m3.218s 00:18:23.097 user 0m1.182s 00:18:23.097 sys 0m1.445s 00:18:23.097 10:41:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:23.097 ************************************ 00:18:23.097 END TEST nvmf_identify_kernel_target 00:18:23.097 10:41:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.097 ************************************ 00:18:23.097 10:41:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:23.097 10:41:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:23.097 10:41:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:23.097 10:41:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.097 ************************************ 00:18:23.097 START TEST nvmf_auth_host 00:18:23.097 ************************************ 00:18:23.097 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:23.357 * Looking for test storage... 
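[editor's note] The clean_kernel_target steps traced just above undo the configfs plumbing that nvmf_identify_kernel_target created. Bash xtrace does not show redirections, so the targets of the logged echo commands are hidden; a minimal stand-alone sketch of the teardown, with the redirect target filled in as an assumption:

# Sketch: tear down the kernel nvmet target (nqn and port number from the log).
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
if [[ -e $subsys ]]; then
    echo 0 > "$subsys/namespaces/1/enable"   # assumed target of the traced 'echo 0'
    rm -f "/sys/kernel/config/nvmet/ports/1/subsystems/$nqn"   # unlink port from subsystem
    rmdir "$subsys/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"
fi
modprobe -r nvmet_tcp nvmet   # afterwards setup.sh rebinds the PCI NVMe devices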
00:18:23.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:23.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.357 --rc genhtml_branch_coverage=1 00:18:23.357 --rc genhtml_function_coverage=1 00:18:23.357 --rc genhtml_legend=1 00:18:23.357 --rc geninfo_all_blocks=1 00:18:23.357 --rc geninfo_unexecuted_blocks=1 00:18:23.357 00:18:23.357 ' 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:23.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.357 --rc genhtml_branch_coverage=1 00:18:23.357 --rc genhtml_function_coverage=1 00:18:23.357 --rc genhtml_legend=1 00:18:23.357 --rc geninfo_all_blocks=1 00:18:23.357 --rc geninfo_unexecuted_blocks=1 00:18:23.357 00:18:23.357 ' 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:23.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.357 --rc genhtml_branch_coverage=1 00:18:23.357 --rc genhtml_function_coverage=1 00:18:23.357 --rc genhtml_legend=1 00:18:23.357 --rc geninfo_all_blocks=1 00:18:23.357 --rc geninfo_unexecuted_blocks=1 00:18:23.357 00:18:23.357 ' 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:23.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.357 --rc genhtml_branch_coverage=1 00:18:23.357 --rc genhtml_function_coverage=1 00:18:23.357 --rc genhtml_legend=1 00:18:23.357 --rc geninfo_all_blocks=1 00:18:23.357 --rc geninfo_unexecuted_blocks=1 00:18:23.357 00:18:23.357 ' 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.357 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:23.358 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:23.358 Cannot find device "nvmf_init_br" 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:23.358 Cannot find device "nvmf_init_br2" 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:23.358 Cannot find device "nvmf_tgt_br" 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:23.358 Cannot find device "nvmf_tgt_br2" 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:23.358 Cannot find device "nvmf_init_br" 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:18:23.358 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:23.616 Cannot find device "nvmf_init_br2" 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:23.616 Cannot find device "nvmf_tgt_br" 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:23.616 Cannot find device "nvmf_tgt_br2" 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:23.616 Cannot find device "nvmf_br" 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:23.616 Cannot find device "nvmf_init_if" 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:23.616 Cannot find device "nvmf_init_if2" 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:23.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:23.616 10:41:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:23.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:23.616 10:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:23.616 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:23.616 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:23.616 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:23.616 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:23.616 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:23.616 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:23.617 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:23.617 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:23.617 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:23.617 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:23.617 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:23.617 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:23.617 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:23.617 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:23.617 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:23.617 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
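[editor's note] nvmf_veth_init traced here assembles the test network from a network namespace, veth pairs, and a bridge. Condensed into a runnable sketch covering one initiator/target leg (the script repeats the same steps for nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4):

# Sketch: one leg of the veth + bridge topology, commands as traced above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk   # target side lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br   # bridge joins the host-side veth peers
ip link set nvmf_tgt_br master nvmf_br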
00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:23.875 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:23.875 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:18:23.875 00:18:23.875 --- 10.0.0.3 ping statistics --- 00:18:23.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.875 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:23.875 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:23.875 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:18:23.875 00:18:23.875 --- 10.0.0.4 ping statistics --- 00:18:23.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.875 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:23.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:23.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:18:23.875 00:18:23.875 --- 10.0.0.1 ping statistics --- 00:18:23.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.875 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:23.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:23.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:18:23.875 00:18:23.875 --- 10.0.0.2 ping statistics --- 00:18:23.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.875 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78574 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78574 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 78574 ']' 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
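[editor's note] The firewall rules above are installed through the ipts wrapper, which tags each rule with an SPDK_NVMF comment; that tag is what the iptr cleanup (traced during nvmftestfini earlier in this log) keys on to restore the firewall. A sketch of the pair as inferred from the traced commands:

# ipts: install a rule tagged with its own arguments (matches the traced iptables call).
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
# iptr: drop every tagged rule via a save/filter/restore round trip.
iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3   # connectivity check, as in the log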
00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:23.875 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.133 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:24.133 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:18:24.133 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:24.133 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:24.133 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=643eaccbebf9362b501500ac7c517b90 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.rsm 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 643eaccbebf9362b501500ac7c517b90 0 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 643eaccbebf9362b501500ac7c517b90 0 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=643eaccbebf9362b501500ac7c517b90 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.rsm 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.rsm 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.rsm 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:24.392 10:41:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b435808aac3ef9b36a3c146440c786c5b7f8c2377927cef33f299460ab47aa9e 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.RL2 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b435808aac3ef9b36a3c146440c786c5b7f8c2377927cef33f299460ab47aa9e 3 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b435808aac3ef9b36a3c146440c786c5b7f8c2377927cef33f299460ab47aa9e 3 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b435808aac3ef9b36a3c146440c786c5b7f8c2377927cef33f299460ab47aa9e 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.RL2 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.RL2 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.RL2 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=05d433e1fc595b7726e3d5ca60de31de2b06219f6e97b791 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Tfo 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 05d433e1fc595b7726e3d5ca60de31de2b06219f6e97b791 0 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 05d433e1fc595b7726e3d5ca60de31de2b06219f6e97b791 0 
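Each gen_dhchap_key call in this stretch draws len/2 random bytes from /dev/urandom via xxd (so a 48-character secret comes from 24 raw bytes, hex-encoded) and then hands the hex string to an inline python snippet (the "python -" steps) to wrap it in the DHHC-1 secret representation. A plausible reconstruction of that formatting step, assuming the same convention nvme-cli's gen-dhchap-key uses (the ASCII secret followed by a little-endian CRC-32 of it, base64-encoded, with the digest id zero-padded to two digits):

key=05d433e1fc595b7726e3d5ca60de31de2b06219f6e97b791   # hex string seen in the log
digest=0                                               # 0=null, 1=sha256, 2=sha384, 3=sha512
python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
# The secret is the ASCII hex string itself, not its decoded bytes.
secret = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(secret).to_bytes(4, "little")   # CRC-32 appended little-endian
print("DHHC-1:%02d:%s:" % (digest, base64.b64encode(secret + crc).decode()))
EOF

Under that assumption, running the sketch with the hex string above should reproduce the DHHC-1:00:MDVkNDMz...: secret that reappears later in the nvmet_auth_set_key step.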
00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=05d433e1fc595b7726e3d5ca60de31de2b06219f6e97b791 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Tfo 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Tfo 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Tfo 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=45e87905c705ce7114c0062cf0d477b23e7d08c2e593da7d 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.O9p 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 45e87905c705ce7114c0062cf0d477b23e7d08c2e593da7d 2 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 45e87905c705ce7114c0062cf0d477b23e7d08c2e593da7d 2 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=45e87905c705ce7114c0062cf0d477b23e7d08c2e593da7d 00:18:24.392 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:18:24.393 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.O9p 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.O9p 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.O9p 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:24.651 10:41:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=92cd8461544e8f21ed04c3029580c35d 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.kaX 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 92cd8461544e8f21ed04c3029580c35d 1 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 92cd8461544e8f21ed04c3029580c35d 1 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=92cd8461544e8f21ed04c3029580c35d 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.kaX 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.kaX 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.kaX 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ae551f43e035b602434cf2321e991ca4 00:18:24.651 10:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.hTX 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ae551f43e035b602434cf2321e991ca4 1 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ae551f43e035b602434cf2321e991ca4 1 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=ae551f43e035b602434cf2321e991ca4 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.hTX 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.hTX 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.hTX 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=46a89f6368c48879ea3f6c2e46dbc4ef406762c105cf9fff 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.lwP 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 46a89f6368c48879ea3f6c2e46dbc4ef406762c105cf9fff 2 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 46a89f6368c48879ea3f6c2e46dbc4ef406762c105cf9fff 2 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=46a89f6368c48879ea3f6c2e46dbc4ef406762c105cf9fff 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.lwP 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.lwP 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.lwP 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:24.651 10:41:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=57c0976ce917409e811c15a83558cccd 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.jnb 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 57c0976ce917409e811c15a83558cccd 0 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 57c0976ce917409e811c15a83558cccd 0 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=57c0976ce917409e811c15a83558cccd 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:24.651 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:24.909 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.jnb 00:18:24.909 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.jnb 00:18:24.909 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.jnb 00:18:24.909 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:18:24.909 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:24.909 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:24.909 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:24.909 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:18:24.909 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:18:24.909 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:24.909 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b9ef599afbcece071021906ad85b668207d469c0c7b51f46876dea69b4015c50 00:18:24.909 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:24.909 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ReZ 00:18:24.909 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b9ef599afbcece071021906ad85b668207d469c0c7b51f46876dea69b4015c50 3 00:18:24.909 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b9ef599afbcece071021906ad85b668207d469c0c7b51f46876dea69b4015c50 3 00:18:24.909 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:24.910 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:24.910 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b9ef599afbcece071021906ad85b668207d469c0c7b51f46876dea69b4015c50 00:18:24.910 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:18:24.910 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:18:24.910 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ReZ 00:18:24.910 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ReZ 00:18:24.910 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ReZ 00:18:24.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.910 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:18:24.910 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78574 00:18:24.910 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 78574 ']' 00:18:24.910 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.910 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:24.910 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.910 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:24.910 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rsm 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.RL2 ]] 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RL2 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Tfo 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.O9p ]] 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.O9p 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.kaX 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.hTX ]] 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hTX 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.lwP 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.jnb ]] 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.jnb 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ReZ 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:25.168 10:41:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:25.168 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:18:25.169 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:18:25.169 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:18:25.169 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:25.169 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:25.169 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:25.169 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:18:25.169 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:18:25.169 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:18:25.429 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:25.429 10:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:25.686 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:25.686 Waiting for block devices as requested 00:18:25.686 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:25.686 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:26.251 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:26.251 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:26.251 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:18:26.251 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:18:26.251 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:26.251 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:26.251 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:18:26.251 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:18:26.251 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:26.251 No valid GPT data, bailing 00:18:26.251 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:26.509 No valid GPT data, bailing 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:26.509 No valid GPT data, bailing 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:26.509 No valid GPT data, bailing 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:18:26.509 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:18:26.510 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:26.510 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:26.510 10:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:26.510 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:18:26.510 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:18:26.510 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:18:26.510 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:18:26.510 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:18:26.510 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:18:26.510 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:18:26.510 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:18:26.510 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:26.768 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid=50e4d619-cecf-4dd2-989d-1336dee31d8f -a 10.0.0.1 -t tcp -s 4420 00:18:26.768 00:18:26.768 Discovery Log Number of Records 2, Generation counter 2 00:18:26.768 =====Discovery Log Entry 0====== 00:18:26.768 trtype: tcp 00:18:26.768 adrfam: ipv4 00:18:26.768 subtype: current discovery subsystem 00:18:26.768 treq: not specified, sq flow control disable supported 00:18:26.768 portid: 1 00:18:26.768 trsvcid: 4420 00:18:26.768 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:26.768 traddr: 10.0.0.1 00:18:26.768 eflags: none 00:18:26.768 sectype: none 00:18:26.768 =====Discovery Log Entry 1====== 00:18:26.768 trtype: tcp 00:18:26.768 adrfam: ipv4 00:18:26.768 subtype: nvme subsystem 00:18:26.768 treq: not specified, sq flow control disable supported 00:18:26.768 portid: 1 00:18:26.768 trsvcid: 4420 00:18:26.768 subnqn: nqn.2024-02.io.spdk:cnode0 00:18:26.768 traddr: 10.0.0.1 00:18:26.768 eflags: none 00:18:26.768 sectype: none 00:18:26.768 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:26.768 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:18:26.768 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:26.768 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:26.768 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:26.768 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:26.768 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:26.768 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:26.768 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:26.768 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
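The configure_kernel_target sequence above (the mkdir/echo/ln -s calls against /sys/kernel/config/nvmet, using the unpartitioned /dev/nvme1n1 that survived the GPT scan) builds a kernel NVMe-oF target for the SPDK host to authenticate against. Spelled out with the configfs attribute names the echoed values most likely land in (an assumption; the log shows only the values, not the redirect targets), the sequence is roughly:

SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
PORT=/sys/kernel/config/nvmet/ports/1

mkdir "$SUBSYS" "$SUBSYS/namespaces/1" "$PORT"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$SUBSYS/attr_model"    # assumed target of 'echo SPDK-...'
echo 1            > "$SUBSYS/attr_allow_any_host"              # assumed
echo /dev/nvme1n1 > "$SUBSYS/namespaces/1/device_path"
echo 1            > "$SUBSYS/namespaces/1/enable"
echo 10.0.0.1     > "$PORT/addr_traddr"
echo tcp          > "$PORT/addr_trtype"
echo 4420         > "$PORT/addr_trsvcid"
echo ipv4         > "$PORT/addr_adrfam"
ln -s "$SUBSYS" "$PORT/subsystems/"

The nvme discover output above (two log entries: the discovery subsystem plus nqn.2024-02.io.spdk:cnode0, both on 10.0.0.1:4420) confirms the port link took effect. The host/auth.sh@36-38 steps then create /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 and link it under the subsystem's allowed_hosts/, and each nvmet_auth_set_key triplet of echoes ('hmac(sha256)', ffdhe2048, DHHC-1:...) presumably lands in that host entry's dhchap_hash, dhchap_dhgroup, and dhchap_key attributes.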
ckey=DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:26.768 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:26.768 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:26.768 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:26.768 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: ]] 00:18:26.768 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:26.768 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:26.768 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:18:26.768 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.769 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.028 nvme0n1 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: ]] 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.028 nvme0n1 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.028 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.029 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.029 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.029 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.288 
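On the SPDK host side, the rpc_cmd calls above register each secret file with the keyring, pin the initiator to one digest/DH-group pair, and attach with DH-HMAC-CHAP enabled. The same flow as explicit rpc.py invocations, assuming rpc_cmd is a thin wrapper over scripts/rpc.py talking to /var/tmp/spdk.sock (all method names and flags below are taken verbatim from the log):

RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

# Register the host key and the controller (bidirectional) key for keyid 0
$RPC keyring_file_add_key key0  /tmp/spdk.key-null.rsm
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RL2

# Restrict the initiator to one digest/DH-group, then connect with that key pair
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0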
10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: ]] 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:27.288 10:41:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.288 nvme0n1 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:27.288 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:27.289 10:41:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: ]] 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.289 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.548 nvme0n1 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: ]] 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.548 10:41:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.548 10:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.548 nvme0n1 00:18:27.548 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.548 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.548 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.548 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.548 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.548 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:27.807 
10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.807 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
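
A note on reading this trace: the bare echo lines at host/auth.sh@48-51 ('hmac(sha256)', the dhgroup name, and the two DHHC-1 secrets) look like no-ops only because bash xtrace omits redirections; nvmet_auth_set_key is re-pointing the target's expected digest, DH group, and secrets at each new key before the host retries the handshake. The sketch below condenses one full cycle into plain commands. It is a reconstruction from the commands visible in this trace, not the test script itself: rpc.py stands in for the suite's rpc_cmd wrapper, the configfs path is an assumption based on the stock Linux nvmet layout, and key2/ckey2 are assumed to be pre-registered keyring names for the DHHC-1:01:... secrets echoed above.

#!/usr/bin/env bash
# One nvmet_auth_set_key + connect_authenticate cycle, reconstructed from
# this trace. Assumptions (not taken from the script): SPDK's scripts/rpc.py
# is on PATH as rpc.py, the target is Linux nvmet with the configfs layout
# below, and keyring entries key2/ckey2 hold the secrets seen in the log.
set -euo pipefail

digest=sha256
dhgroup=ffdhe2048
keyid=2
host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path

# Target side (what the echoes at host/auth.sh@48-51 do): set the digest,
# DH group, host secret, and controller secret nvmet should negotiate.
echo "hmac($digest)" > "$host_cfg/dhchap_hash"
echo "$dhgroup" > "$host_cfg/dhchap_dhgroup"
echo 'DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp:' > "$host_cfg/dhchap_key"
echo 'DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa:' > "$host_cfg/dhchap_ctrl_key"

# Initiator side: pin the host to exactly this digest/dhgroup pair, attach
# with the matching key pair, and treat "a controller named nvme0 exists"
# as proof the DH-HMAC-CHAP handshake succeeded.
rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
[[ "$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

# Detach so the next (digest, dhgroup, keyid) combination starts clean.
rpc.py bdev_nvme_detach_controller nvme0

Every block in this trace is that cycle re-run with the next keyid, and the for-loop at host/auth.sh@101 advances the DH group (ffdhe2048, ffdhe3072, ffdhe4096, ...), so a failure isolates the exact digest/dhgroup/key tuple that broke authentication.
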
00:18:27.808 nvme0n1 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:27.808 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: ]] 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:28.374 10:41:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.374 nvme0n1 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:28.374 10:41:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:28.374 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: ]] 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:28.375 10:41:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.375 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.634 nvme0n1 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: ]] 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.634 10:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.634 nvme0n1 00:18:28.634 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.634 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:28.634 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:28.634 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.634 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.634 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: ]] 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.894 nvme0n1 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.894 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.153 nvme0n1 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:29.153 10:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:30.090 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:30.090 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: ]] 00:18:30.090 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:30.090 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:18:30.090 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:30.090 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:30.090 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:30.090 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:30.090 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:30.090 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:30.090 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.090 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.090 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.090 10:41:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:30.090 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:30.090 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:30.090 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:30.090 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.091 nvme0n1 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: ]] 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.091 10:41:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.091 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.433 nvme0n1 00:18:30.433 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.433 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:30.433 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:30.433 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.433 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.433 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: ]] 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.434 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.693 nvme0n1 00:18:30.693 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.693 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:30.693 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:30.693 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.693 10:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: ]] 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.693 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.952 nvme0n1 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:30.952 10:41:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.952 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.953 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.953 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:30.953 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:30.953 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:30.953 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:30.953 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:30.953 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:30.953 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:30.953 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:30.953 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:30.953 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:30.953 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:30.953 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:30.953 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.953 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.212 nvme0n1 00:18:31.212 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.212 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:31.212 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:31.212 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.212 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.212 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.212 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.212 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:31.212 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.212 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.212 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
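Each of the rounds above reduces to the same four host-side RPCs. A condensed sketch using scripts/rpc.py (rpc_cmd in the trace is the test suite's wrapper around it; key3/ckey3 are keyring names registered earlier in the run, outside this excerpt):

  # 1. Restrict the initiator to the digest/dhgroup pair under test.
  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  # 2. Attach; DH-HMAC-CHAP runs as part of the fabric CONNECT exchange.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
  # 3. Verify the controller came up under the expected name...
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  # 4. ...and detach before the next key/dhgroup combination.
  scripts/rpc.py bdev_nvme_detach_controller nvme0
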
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.212 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:31.212 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:31.212 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:18:31.212 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:31.212 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:31.212 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:31.212 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:31.212 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:31.212 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:31.212 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:31.212 10:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: ]] 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.112 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.371 nvme0n1 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: ]] 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:33.371 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:33.372 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.372 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:33.372 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.372 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.372 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.372 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.372 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:33.372 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:33.372 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:33.372 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.372 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.372 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:33.372 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.372 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:33.372 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:33.372 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:33.372 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.372 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.372 10:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.940 nvme0n1 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.940 10:41:59 
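The DHHC-1 strings traced here are NVMe-oF shared secrets in the textual form DHHC-1:<hmac>:<base64 payload>:. Assuming the usual nvme-cli/libnvme convention, the hmac field (00 for the key just above; 01/02/03 appear on other keys in this run) names an optional SHA-256/384/512 transformation of the secret, and the payload is the raw secret followed by a CRC-32. A quick size check against the key traced just above:

  # Assumes payload = secret || 4-byte CRC-32 (nvme-cli gen-dhchap-key style).
  key='DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==:'
  payload=$(cut -d: -f3 <<< "$key")
  printf '%s' "$payload" | base64 -d | wc -c
  # -> 52, i.e. a 48-byte secret plus the 4-byte CRC-32
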
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: ]] 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.940 10:41:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.940 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.201 nvme0n1 00:18:34.201 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.201 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.201 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.201 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.201 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.201 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.201 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.201 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.201 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.201 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.201 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.201 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.201 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:18:34.201 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.201 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:34.201 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:34.201 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:34.201 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:34.201 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:34.201 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:34.201 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:34.201 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:34.459 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: ]] 00:18:34.459 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:34.459 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:18:34.459 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.459 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:34.459 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:34.459 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:34.459 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.459 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:34.459 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.459 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.459 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.459 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.459 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:34.459 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:34.459 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:34.459 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.459 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.459 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:34.460 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.460 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:34.460 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:34.460 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:34.460 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:34.460 10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.460 
10:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.717 nvme0n1 00:18:34.717 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.717 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.717 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.718 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.284 nvme0n1 00:18:35.284 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.284 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:35.284 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:35.284 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.284 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.284 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.284 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:35.285 10:42:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: ]] 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
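On the target side, nvmet_auth_set_key (whose echoes show up as auth.sh@48 through @51 in the trace) amounts to programming the kernel nvmet host entry through configfs. A sketch of the ffdhe8192/key0 round just traced, assuming the stock Linux nvmet auth attribute names:

  # Paths and attribute names assume the standard nvmet configfs layout;
  # key values are the ones from this round of the log.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"       # auth.sh@48
  echo ffdhe8192      > "$host/dhchap_dhgroup"    # auth.sh@49
  echo 'DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u:' \
      > "$host/dhchap_key"                        # auth.sh@50
  echo 'DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=:' \
      > "$host/dhchap_ctrl_key"                   # auth.sh@51
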
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.285 10:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.849 nvme0n1 00:18:35.849 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.849 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:35.849 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:35.849 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.849 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.849 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.849 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.849 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: ]] 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.850 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.414 nvme0n1 00:18:36.414 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.672 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:36.672 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.672 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:36.672 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.672 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: ]] 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:36.673 
10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.673 10:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.239 nvme0n1 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
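The local ip / ip_candidates block repeated before every attach is get_main_ns_ip from nvmf/common.sh. Reconstructed from the trace as a sketch (TEST_TRANSPORT is assumed to be the variable that expands to the literal tcp in the [[ -z tcp ]] test), it maps the transport to the name of the variable holding the address and then dereferences that name:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP            # common.sh@772
      ip_candidates["tcp"]=NVMF_INITIATOR_IP                # common.sh@773
      # Unknown transport or no candidate -> fail.            common.sh@775
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}                  # common.sh@776
      # ${!ip} dereferences the variable named by $ip, which is why the
      # trace shows [[ -z 10.0.0.1 ]] just before the address is echoed.
      [[ -z ${!ip} ]] && return 1                           # common.sh@778
      echo "${!ip}"                                         # common.sh@783
  }
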
DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: ]] 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.239 10:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.804 nvme0n1 00:18:37.804 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.064 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.064 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.064 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.064 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.064 10:42:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.064 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.064 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.064 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.064 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.064 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.064 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.064 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:18:38.064 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.064 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:38.064 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:38.064 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:38.064 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:38.064 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:38.065 10:42:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.065 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.630 nvme0n1 00:18:38.630 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.630 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.630 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.630 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.630 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.630 10:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.630 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.630 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.630 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.630 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.630 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.630 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:38.630 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.630 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.630 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:18:38.630 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.630 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:38.630 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:38.630 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:38.630 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:38.630 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:38.630 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:38.630 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:38.630 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: ]] 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.631 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:38.889 nvme0n1 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: ]] 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.889 nvme0n1 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.889 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.890 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.890 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.890 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.890 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.148 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.148 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.148 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.148 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.148 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.148 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.148 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:18:39.148 
10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.148 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:39.148 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:39.148 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: ]] 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.149 nvme0n1 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: ]] 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.149 
10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.149 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.424 nvme0n1 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.425 nvme0n1 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.425 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: ]] 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.685 10:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.685 nvme0n1 00:18:39.685 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.685 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.685 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.685 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.685 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.685 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.685 
10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.685 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.685 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.685 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.685 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.685 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.685 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:18:39.685 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.685 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:39.685 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:39.685 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:39.685 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: ]] 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:39.686 10:42:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.686 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.946 nvme0n1 00:18:39.946 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.946 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.946 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.946 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.946 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.946 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.946 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.946 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.946 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.946 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.946 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.946 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.946 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:18:39.946 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.946 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:39.946 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:39.947 10:42:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: ]] 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.947 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.206 nvme0n1 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: ]] 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.206 10:42:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.206 nvme0n1 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.206 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:40.466 
10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
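
The trace above repeats one fixed cycle per (digest, dhgroup, keyid) combination: host/auth.sh first programs the secret into the target side (nvmet_auth_set_key, @42-@51), then connect_authenticate (@55-@65) restricts the initiator to the pair under test with bdev_nvme_set_options, attaches with bdev_nvme_attach_controller, confirms authentication succeeded by checking that bdev_nvme_get_controllers reports nvme0, and detaches. The secrets use the NVMe-oF DHHC-1:<id>:<base64>: representation seen throughout the log. What follows is a minimal sketch of that loop, not the script itself: rpc_cmd, nvmet_auth_set_key, and the keys/ckeys arrays come from the test harness, the digest/dhgroup values are only the ones this excerpt happens to exercise, and the NQNs and 10.0.0.1:4420 are the values the trace shows.

    # Sketch only: mirrors the per-iteration flow visible in the xtrace above.
    for digest in sha256 sha384; do
      for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192; do
        for keyid in "${!keys[@]}"; do
          # Program the target side with the key (and ctrlr key) under test.
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
          # Restrict the initiator to this digest/dhgroup pair.
          rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          # Pass --dhchap-ctrlr-key only when a ckey exists; keyid 4 in the
          # trace has ckey='' and the flag is omitted from the attach.
          ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
          rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
          # Authentication succeeded iff the controller actually came up.
          [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
          rpc_cmd bdev_nvme_detach_controller nvme0
        done
      done
    done

The verification shows up in the trace as [[ nvme0 == \n\v\m\e\0 ]] only because xtrace backslash-escapes the quoted right-hand side of the comparison; the surrounding xtrace_disable / set +x pairs are the harness muting trace output inside rpc_cmd.
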
00:18:40.466 nvme0n1 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:40.466 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:40.467 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:40.467 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: ]] 00:18:40.467 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:40.467 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:18:40.467 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:40.467 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:40.467 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:40.467 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:40.467 10:42:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.467 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:40.467 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.467 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.467 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.467 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.467 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:40.467 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:40.467 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:40.725 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.725 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.725 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:40.725 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.725 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:40.725 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:40.725 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:40.725 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.725 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.725 10:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.725 nvme0n1 00:18:40.726 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.726 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:40.726 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.726 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.726 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.726 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.726 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.726 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:40.726 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.726 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.984 10:42:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: ]] 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:40.984 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.985 10:42:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:40.985 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:40.985 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:40.985 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.985 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.985 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.985 nvme0n1 00:18:40.985 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.985 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.985 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:40.985 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.985 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.985 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.244 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.244 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:41.244 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.244 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: ]] 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.245 nvme0n1 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.245 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: ]] 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:41.506 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.507 nvme0n1 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.507 10:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:41.777 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:41.778 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:41.778 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:41.778 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:41.778 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:41.778 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:41.778 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.778 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.778 nvme0n1 00:18:41.778 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: ]] 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.037 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:42.038 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.038 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.038 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.038 10:42:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.038 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:42.038 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:42.038 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:42.038 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.038 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.038 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:42.038 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.038 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:42.038 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:42.038 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:42.038 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.038 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.038 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.296 nvme0n1 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: ]] 00:18:42.296 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.297 10:42:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.297 10:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.864 nvme0n1 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: ]] 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.864 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.123 nvme0n1 00:18:43.123 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.123 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.123 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: ]] 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.124 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:43.382 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.382 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.382 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.382 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.382 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:43.382 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:43.382 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:43.382 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.382 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.382 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:43.382 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.382 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:43.382 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:43.382 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:43.382 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:43.382 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.382 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.642 nvme0n1 00:18:43.642 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.642 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.642 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.642 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.642 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.642 10:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:43.642 10:42:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.642 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.224 nvme0n1 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: ]] 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.224 10:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.792 nvme0n1 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: ]] 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.792 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.360 nvme0n1 00:18:45.360 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.360 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.360 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.360 10:42:10 
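
The helper that repeats throughout this stretch of trace, nvmet_auth_set_key (host/auth.sh@42-@51), can be pieced back together from the xtrace lines alone; the sketch below is that reconstruction. Only the echoed values are certain — set -x never shows where an echo is redirected, so the ${nvmet_host} directory and attribute names here are assumptions (kernel nvmet configfs-style paths), not confirmed SPDK test internals.

    # Reconstruction from the xtrace output above; the redirection targets
    # are assumed nvmet configfs attributes under a hypothetical ${nvmet_host}
    # directory and do not appear anywhere in the trace.
    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey                       # @42
        digest="$1" dhgroup="$2" keyid="$3"                       # @44
        key=${keys[keyid]} ckey=${ckeys[keyid]}                   # @45-46

        echo "hmac(${digest})" > "${nvmet_host}/dhchap_hash"      # @48
        echo "${dhgroup}" > "${nvmet_host}/dhchap_dhgroup"        # @49
        echo "${key}" > "${nvmet_host}/dhchap_key"                # @50
        [[ -z ${ckey} ]] || echo "${ckey}" > "${nvmet_host}/dhchap_ctrl_key"  # @51
    }
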
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.360 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.360 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.619 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.619 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.619 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.619 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.619 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.619 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: ]] 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.620 10:42:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.620 10:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.187 nvme0n1 00:18:46.187 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.187 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.187 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.187 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.187 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.187 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.187 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.187 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.187 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.187 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.187 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.187 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.187 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:18:46.187 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.187 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:46.187 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:46.187 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:46.187 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: ]] 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:46.188 10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.188 
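
The get_main_ns_ip block that the trace expands before every attach (nvmf/common.sh@769-@783) is a small indirection idiom: an associative array maps the transport under test to the *name* of the environment variable holding the address, which is then dereferenced with ${!ip}. A condensed transcription, with names taken straight from the trace and error paths simplified:

    # Transcribed from the nvmf/common.sh xtrace lines above.
    get_main_ns_ip() {
        local ip                                         # @769
        local -A ip_candidates                           # @770
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP       # @772
        ip_candidates["tcp"]=NVMF_INITIATOR_IP           # @773

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @775
        ip=${ip_candidates[$TEST_TRANSPORT]}             # @776: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                      # @778: indirect expansion
        echo "${!ip}"                                    # @783: prints 10.0.0.1 in this run
    }
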
10:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.756 nvme0n1 00:18:46.756 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.756 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.756 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.756 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.756 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.756 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.756 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.756 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.756 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.756 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.016 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.584 nvme0n1 00:18:47.584 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.584 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.584 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.584 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.584 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.584 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.584 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.584 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.584 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.584 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.584 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.584 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:47.584 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:47.584 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.584 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:18:47.584 10:42:12 
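
Every iteration in this log follows the same round trip, visible at host/auth.sh@55-@65: restrict the SPDK initiator to one digest/dhgroup pair, attach with the key under test, confirm the controller authenticated by checking its name, then detach. A sketch assembled from those trace lines — rpc_cmd is the test suite's JSON-RPC wrapper, and the NQNs and port are copied from this log; anything the trace does not show (waits, return-value plumbing) is omitted:

    # Assembled from the host/auth.sh xtrace line numbers.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3               # @55-57

        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"   # @60

        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}        # @58, @61

        # Authentication succeeded iff the controller came up under its name
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]  # @64
        rpc_cmd bdev_nvme_detach_controller nvme0                     # @65
    }
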
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.584 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:47.584 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: ]] 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:47.585 10:42:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.585 10:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.585 nvme0n1 00:18:47.585 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.585 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.585 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.585 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.585 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: ]] 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:47.845 10:42:13 
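
The secrets echoed throughout this log use the DH-HMAC-CHAP secret representation from the NVMe specification (the same format nvme-cli emits): DHHC-1:<id>:<base64>:, where the two-digit id names the retained-key transformation hash (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the secret followed by a 4-byte CRC-32. That mapping and the CRC detail come from the spec, not from this log; the length check below is plain arithmetic on one of the keys actually used in this run:

    # 48 base64 chars decode to 36 bytes: a 32-byte secret plus 4-byte CRC32.
    key='DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u:'
    payload=$(cut -d: -f3 <<< "$key")
    base64 -d <<< "$payload" | wc -c    # -> 36
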
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.845 nvme0n1 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: ]] 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.845 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.105 nvme0n1 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: ]] 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.105 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.365 nvme0n1 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.365 nvme0n1 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:48.365 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: ]] 00:18:48.366 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:48.366 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:18:48.366 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.366 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:48.366 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:48.366 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:48.366 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.366 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:48.366 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.366 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.626 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.626 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.626 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:48.626 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:48.626 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:48.626 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.626 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.626 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:48.626 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.626 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:48.626 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:48.626 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:48.626 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.626 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.626 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:48.626 nvme0n1 00:18:48.626 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.626 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.626 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.626 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.626 10:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: ]] 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:48.626 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:48.627 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.627 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.627 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.885 nvme0n1 00:18:48.885 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.885 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.885 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.885 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.885 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.885 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.885 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:18:48.886 
10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: ]] 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.886 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.144 nvme0n1 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: ]] 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:49.144 
10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.144 nvme0n1 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.144 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.402 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.402 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.402 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.402 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:49.402 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.402 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:49.402 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:18:49.402 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.402 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:49.402 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:49.402 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:49.402 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:49.402 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:49.402 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.403 nvme0n1 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: ]] 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.403 10:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.662 nvme0n1 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.662 
10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: ]] 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:49.662 10:42:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.662 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.922 nvme0n1 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:49.922 10:42:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: ]] 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.922 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.181 nvme0n1 00:18:50.181 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.181 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.181 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.181 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:50.181 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.181 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.181 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.181 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.181 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.181 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.181 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.181 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:50.181 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:18:50.181 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.181 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:50.181 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:50.181 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:50.181 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:50.181 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:50.181 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:50.181 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:50.181 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: ]] 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.182 10:42:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.182 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.441 nvme0n1 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:50.441 
10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:50.441 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.700 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.700 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:50.700 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.700 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:50.700 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:50.700 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:50.700 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:50.700 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.700 10:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
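[annotation] The same five-step cycle repeats above for every digest/dhgroup/keyid combination. Condensed into script form, one connect_authenticate iteration looks roughly like the sketch below; rpc_cmd, the keys/ckeys arrays, the NQNs, and the 10.0.0.1:4420 portal are taken directly from the trace, but the function body is a simplified reconstruction, not the verbatim auth.sh.

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # auth.sh@58: the controller key is optional -- keyid 4 carries no ckey,
    # so the array expands to nothing in that case.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # auth.sh@60: restrict the initiator to a single digest/dhgroup pair.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # auth.sh@61: connect -- DH-HMAC-CHAP runs during controller attach.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    # auth.sh@64-65: authentication succeeded iff the controller materialized;
    # then tear it down for the next iteration.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}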
00:18:50.700 nvme0n1 00:18:50.700 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.700 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:50.700 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.700 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.700 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.700 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.700 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.700 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.700 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.700 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.958 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.958 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.958 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:50.958 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:18:50.958 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.958 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:50.958 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:50.958 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:50.958 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:50.958 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:50.958 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:50.958 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:50.958 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:50.958 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: ]] 00:18:50.958 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:50.958 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:18:50.958 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:50.958 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:50.958 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:50.958 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:50.958 10:42:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:50.958 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:50.959 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.959 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.959 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.959 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:50.959 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:50.959 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:50.959 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:50.959 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.959 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.959 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:50.959 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.959 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:50.959 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:50.959 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:50.959 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.959 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.959 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.218 nvme0n1 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.218 10:42:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: ]] 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.218 10:42:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.218 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.786 nvme0n1 00:18:51.786 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.786 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.786 10:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: ]] 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.786 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.046 nvme0n1 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: ]] 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.046 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.698 nvme0n1 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:52.698 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:52.699 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:52.699 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.699 10:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.960 nvme0n1 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQzZWFjY2JlYmY5MzYyYjUwMTUwMGFjN2M1MTdiOTDE6J5u: 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: ]] 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQzNTgwOGFhYzNlZjliMzZhM2MxNDY0NDBjNzg2YzViN2Y4YzIzNzc5MjdjZWYzM2YyOTk0NjBhYjQ3YWE5ZQ0Fst8=: 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.960 10:42:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.960 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.528 nvme0n1 00:18:53.528 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.528 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.528 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.528 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.528 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.528 10:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.528 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.528 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.528 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.528 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: ]] 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.788 10:42:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.788 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.356 nvme0n1 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: ]] 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:54.356 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:54.357 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:54.357 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:54.357 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:54.357 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:54.357 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.357 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.357 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.357 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:54.357 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:54.357 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:54.357 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:54.357 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.357 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.357 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:54.357 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.357 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:54.357 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:54.357 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:54.357 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.357 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.357 10:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.923 nvme0n1 00:18:54.923 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.923 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.923 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.923 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.923 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:54.923 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZhODlmNjM2OGM0ODg3OWVhM2Y2YzJlNDZkYmM0ZWY0MDY3NjJjMTA1Y2Y5ZmZmC4wQfQ==: 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: ]] 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdjMDk3NmNlOTE3NDA5ZTgxMWMxNWE4MzU1OGNjY2Tdohpd: 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.182 10:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.748 nvme0n1 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjllZjU5OWFmYmNlY2UwNzEwMjE5MDZhZDg1YjY2ODIwN2Q0NjljMGM3YjUxZjQ2ODc2ZGVhNjliNDAxNWM1MDM2JLI=: 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:55.748 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:18:55.749 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:55.749 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:55.749 10:42:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:55.749 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:55.749 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:55.749 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:55.749 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.749 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.749 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.749 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:55.749 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:55.749 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:55.749 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:55.749 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.749 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.749 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:55.749 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.749 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:55.749 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:55.749 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:55.749 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:55.749 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.749 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.684 nvme0n1 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: ]] 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.685 request: 00:18:56.685 { 00:18:56.685 "name": "nvme0", 00:18:56.685 "trtype": "tcp", 00:18:56.685 "traddr": "10.0.0.1", 00:18:56.685 "adrfam": "ipv4", 00:18:56.685 "trsvcid": "4420", 00:18:56.685 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:56.685 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:56.685 "prchk_reftag": false, 00:18:56.685 "prchk_guard": false, 00:18:56.685 "hdgst": false, 00:18:56.685 "ddgst": false, 00:18:56.685 "allow_unrecognized_csi": false, 00:18:56.685 "method": "bdev_nvme_attach_controller", 00:18:56.685 "req_id": 1 00:18:56.685 } 00:18:56.685 Got JSON-RPC error response 00:18:56.685 response: 00:18:56.685 { 00:18:56.685 "code": -5, 00:18:56.685 "message": "Input/output error" 00:18:56.685 } 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:56.685 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:18:56.686 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:56.686 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:56.686 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.686 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:56.686 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.686 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:56.686 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.686 10:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.686 request: 00:18:56.686 { 00:18:56.686 "name": "nvme0", 00:18:56.686 "trtype": "tcp", 00:18:56.686 "traddr": "10.0.0.1", 00:18:56.686 "adrfam": "ipv4", 00:18:56.686 "trsvcid": "4420", 00:18:56.686 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:56.686 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:56.686 "prchk_reftag": false, 00:18:56.686 "prchk_guard": false, 00:18:56.686 "hdgst": false, 00:18:56.686 "ddgst": false, 00:18:56.686 "dhchap_key": "key2", 00:18:56.686 "allow_unrecognized_csi": false, 00:18:56.686 "method": "bdev_nvme_attach_controller", 00:18:56.686 "req_id": 1 00:18:56.686 } 00:18:56.686 Got JSON-RPC error response 00:18:56.686 response: 00:18:56.686 { 00:18:56.686 "code": -5, 00:18:56.686 "message": "Input/output error" 00:18:56.686 } 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:56.686 10:42:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.686 request: 00:18:56.686 { 00:18:56.686 "name": "nvme0", 00:18:56.686 "trtype": "tcp", 00:18:56.686 "traddr": "10.0.0.1", 00:18:56.686 "adrfam": "ipv4", 00:18:56.686 "trsvcid": "4420", 
00:18:56.686 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:56.686 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:56.686 "prchk_reftag": false, 00:18:56.686 "prchk_guard": false, 00:18:56.686 "hdgst": false, 00:18:56.686 "ddgst": false, 00:18:56.686 "dhchap_key": "key1", 00:18:56.686 "dhchap_ctrlr_key": "ckey2", 00:18:56.686 "allow_unrecognized_csi": false, 00:18:56.686 "method": "bdev_nvme_attach_controller", 00:18:56.686 "req_id": 1 00:18:56.686 } 00:18:56.686 Got JSON-RPC error response 00:18:56.686 response: 00:18:56.686 { 00:18:56.686 "code": -5, 00:18:56.686 "message": "Input/output error" 00:18:56.686 } 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.686 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.945 nvme0n1 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: ]] 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:56.945 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:18:56.946 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:56.946 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:56.946 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.946 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:56.946 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.946 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:56.946 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.946 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.946 request: 00:18:56.946 { 00:18:56.946 "name": "nvme0", 00:18:56.946 "dhchap_key": "key1", 00:18:56.946 "dhchap_ctrlr_key": "ckey2", 00:18:56.946 "method": "bdev_nvme_set_keys", 00:18:56.946 "req_id": 1 00:18:56.946 } 00:18:56.946 Got JSON-RPC error response 00:18:56.946 response: 00:18:56.946 
{ 00:18:56.946 "code": -5, 00:18:56.946 "message": "Input/output error" 00:18:56.946 } 00:18:56.946 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:56.946 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:18:56.946 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:56.946 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:56.946 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:56.946 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:18:56.946 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.946 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.946 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.946 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.946 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:18:56.946 10:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:18:58.345 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.345 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:18:58.345 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.345 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.345 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.345 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:18:58.345 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:58.345 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.345 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:58.345 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:58.345 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkNDMzZTFmYzU5NWI3NzI2ZTNkNWNhNjBkZTMxZGUyYjA2MjE5ZjZlOTdiNzkxd7dUHQ==: 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: ]] 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDVlODc5MDVjNzA1Y2U3MTE0YzAwNjJjZjBkNDc3YjIzZTdkMDhjMmU1OTNkYTdkYGXQtg==: 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.346 nvme0n1 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTJjZDg0NjE1NDRlOGYyMWVkMDRjMzAyOTU4MGMzNWTMyBJp: 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: ]] 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU1NTFmNDNlMDM1YjYwMjQzNGNmMjMyMWU5OTFjYTQB14Xa: 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.346 request: 00:18:58.346 { 00:18:58.346 "name": "nvme0", 00:18:58.346 "dhchap_key": "key2", 00:18:58.346 "dhchap_ctrlr_key": "ckey1", 00:18:58.346 "method": "bdev_nvme_set_keys", 00:18:58.346 "req_id": 1 00:18:58.346 } 00:18:58.346 Got JSON-RPC error response 00:18:58.346 response: 00:18:58.346 { 00:18:58.346 "code": -13, 00:18:58.346 "message": "Permission denied" 00:18:58.346 } 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:18:58.346 10:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:59.281 rmmod nvme_tcp 00:18:59.281 rmmod nvme_fabrics 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78574 ']' 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78574 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 78574 ']' 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 78574 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:18:59.281 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:59.282 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78574 00:18:59.539 killing process with pid 78574 00:18:59.539 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:59.539 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:59.539 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78574' 00:18:59.539 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 78574 00:18:59.539 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 78574 00:18:59.539 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:59.539 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:59.539 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:59.539 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:18:59.539 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:18:59.539 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:59.539 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:18:59.539 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:59.539 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:59.539 10:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:59.539 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:59.539 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:59.797 10:42:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:18:59.797 10:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:00.733 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:00.734 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
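
The teardown above unwinds the kernel nvmet target leaf-first through configfs before the modules can be unloaded: the allowed-host link, the host entry, the port-to-subsystem link, the namespace, the port, and finally the subsystem directory. A minimal standalone sketch of the same order, using the NQNs from the trace (the redirect target of the bare "echo 0" is an assumption; bash xtrace does not print redirections, but it matches disabling the namespace before removal):

    cfg=/sys/kernel/config/nvmet
    rm "$cfg/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0"
    rmdir "$cfg/hosts/nqn.2024-02.io.spdk:host0"
    echo 0 > "$cfg/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable"   # assumed target of the bare 'echo 0'
    rm -f "$cfg/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"                  # unlink port from subsystem
    rmdir "$cfg/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1"
    rmdir "$cfg/ports/1"
    rmdir "$cfg/subsystems/nqn.2024-02.io.spdk:cnode0"
    modprobe -r nvmet_tcp nvmet   # succeeds only once no configfs object holds the modules

Children must go before parents here: a configfs directory with live links or an enabled namespace returns EBUSY on rmdir, which is why the script walks the hierarchy in exactly this order.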
00:19:00.734 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:00.734 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.rsm /tmp/spdk.key-null.Tfo /tmp/spdk.key-sha256.kaX /tmp/spdk.key-sha384.lwP /tmp/spdk.key-sha512.ReZ /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:19:00.734 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:01.301 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:01.301 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:01.301 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:01.301 00:19:01.301 real 0m37.997s 00:19:01.301 user 0m34.264s 00:19:01.301 sys 0m3.811s 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:01.301 ************************************ 00:19:01.301 END TEST nvmf_auth_host 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.301 ************************************ 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.301 ************************************ 00:19:01.301 START TEST nvmf_digest 00:19:01.301 ************************************ 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:01.301 * Looking for test storage... 
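
Both suites run under the same run_test wrapper visible in these banners: it prints the START banner, times the test body (the real/user/sys triple above, roughly 38 s of wall clock for the auth suite), and emits the END marker once the body returns. A rough sketch of the wrapper's shape, assuming a simplified form of the real autotest_common.sh helper (which additionally records timing data and manages xtrace state):

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                # produces the real/user/sys summary seen above
        echo "END TEST $name"
    }
    run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp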
00:19:01.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:01.301 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:19:01.302 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:19:01.302 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:01.302 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:01.302 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:19:01.302 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:01.302 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:01.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.302 --rc genhtml_branch_coverage=1 00:19:01.302 --rc genhtml_function_coverage=1 00:19:01.302 --rc genhtml_legend=1 00:19:01.302 --rc geninfo_all_blocks=1 00:19:01.302 --rc geninfo_unexecuted_blocks=1 00:19:01.302 00:19:01.302 ' 00:19:01.302 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:01.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.302 --rc genhtml_branch_coverage=1 00:19:01.302 --rc genhtml_function_coverage=1 00:19:01.302 --rc genhtml_legend=1 00:19:01.302 --rc geninfo_all_blocks=1 00:19:01.302 --rc geninfo_unexecuted_blocks=1 00:19:01.302 00:19:01.302 ' 00:19:01.302 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:01.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.302 --rc genhtml_branch_coverage=1 00:19:01.302 --rc genhtml_function_coverage=1 00:19:01.302 --rc genhtml_legend=1 00:19:01.302 --rc geninfo_all_blocks=1 00:19:01.302 --rc geninfo_unexecuted_blocks=1 00:19:01.302 00:19:01.302 ' 00:19:01.302 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:01.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.302 --rc genhtml_branch_coverage=1 00:19:01.302 --rc genhtml_function_coverage=1 00:19:01.302 --rc genhtml_legend=1 00:19:01.302 --rc geninfo_all_blocks=1 00:19:01.302 --rc geninfo_unexecuted_blocks=1 00:19:01.302 00:19:01.302 ' 00:19:01.302 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:01.302 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:19:01.561 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:01.561 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:01.561 10:42:26 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:01.561 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:01.561 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:01.561 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:01.561 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:01.561 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:01.561 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:01.561 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:01.561 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:19:01.561 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:19:01.561 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:01.561 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:01.561 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:01.561 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:01.561 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:01.561 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:19:01.561 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:01.561 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:01.561 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:01.562 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:01.562 Cannot find device "nvmf_init_br" 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:01.562 Cannot find device "nvmf_init_br2" 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:01.562 Cannot find device "nvmf_tgt_br" 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:19:01.562 Cannot find device "nvmf_tgt_br2" 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:01.562 Cannot find device "nvmf_init_br" 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:01.562 Cannot find device "nvmf_init_br2" 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:01.562 Cannot find device "nvmf_tgt_br" 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:01.562 Cannot find device "nvmf_tgt_br2" 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:01.562 Cannot find device "nvmf_br" 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:01.562 Cannot find device "nvmf_init_if" 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:01.562 Cannot find device "nvmf_init_if2" 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:01.562 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:01.562 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:01.562 10:42:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:01.562 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:01.562 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:01.562 10:42:27 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:01.562 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:01.562 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:01.562 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:01.562 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:01.821 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:01.821 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:01.821 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:01.821 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:01.821 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:01.821 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:01.822 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:01.822 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:19:01.822 00:19:01.822 --- 10.0.0.3 ping statistics --- 00:19:01.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.822 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:01.822 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:01.822 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:19:01.822 00:19:01.822 --- 10.0.0.4 ping statistics --- 00:19:01.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.822 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:01.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:01.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:01.822 00:19:01.822 --- 10.0.0.1 ping statistics --- 00:19:01.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.822 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:01.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:01.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:19:01.822 00:19:01.822 --- 10.0.0.2 ping statistics --- 00:19:01.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.822 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:01.822 ************************************ 00:19:01.822 START TEST nvmf_digest_clean 00:19:01.822 ************************************ 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
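
At this point nvmf_veth_init has both built and proven the virtual topology: four veth pairs hang off the nvmf_br bridge, the two target ends sit inside the nvmf_tgt_ns_spdk namespace as 10.0.0.3 and 10.0.0.4, the two initiator ends stay in the root namespace as 10.0.0.1 and 10.0.0.2, and the iptables ACCEPT rules admit port 4420 plus bridge-internal forwarding. The four pings above check each direction once; the same matrix can be rerun by hand at any time:

    ping -c 1 10.0.0.3                                   # root ns -> first target interface
    ping -c 1 10.0.0.4                                   # root ns -> second target interface
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target ns -> first initiator interface
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2    # target ns -> second initiator interface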
00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:01.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=80224 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 80224 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 80224 ']' 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:01.822 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:01.822 [2024-11-15 10:42:27.298105] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:19:01.822 [2024-11-15 10:42:27.298372] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.081 [2024-11-15 10:42:27.446966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.081 [2024-11-15 10:42:27.516241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.081 [2024-11-15 10:42:27.516524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.081 [2024-11-15 10:42:27.516698] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.081 [2024-11-15 10:42:27.516870] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.081 [2024-11-15 10:42:27.516986] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
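
nvmfappstart backgrounds the target inside the namespace and then waits (waitforlisten, bounded by the max_retries=100 traced above) until the RPC socket answers before the test proceeds; --wait-for-rpc keeps the SPDK framework paused at startup so configuration can be injected before subsystem init. A hedged sketch of that sequence, with a simple polling loop standing in for waitforlisten's internals (rpc_get_methods is just a cheap RPC to probe with):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # poll /var/tmp/spdk.sock until the app services RPCs (the real helper caps the retries)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done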
00:19:02.081 [2024-11-15 10:42:27.517420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.081 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:02.081 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:19:02.081 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:02.081 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:02.081 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:02.340 [2024-11-15 10:42:27.663677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:02.340 null0 00:19:02.340 [2024-11-15 10:42:27.719182] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.340 [2024-11-15 10:42:27.743327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80250 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80250 /var/tmp/bperf.sock 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 80250 ']' 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:02.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:02.340 10:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:02.340 [2024-11-15 10:42:27.807592] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:19:02.340 [2024-11-15 10:42:27.807889] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80250 ] 00:19:02.598 [2024-11-15 10:42:27.950341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.598 [2024-11-15 10:42:28.001258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.598 10:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:02.598 10:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:19:02.598 10:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:02.598 10:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:02.598 10:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:02.959 [2024-11-15 10:42:28.367037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:02.959 10:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:02.959 10:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:03.526 nvme0n1 00:19:03.526 10:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:03.526 10:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:03.526 Running I/O for 2 seconds... 
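
The initiator side mirrors the target bring-up: bdevperf starts paused on its own RPC socket, framework_start_init releases it, the controller is attached with --ddgst so every NVMe/TCP data PDU carries a CRC32C data digest (the code path this suite exists to exercise), and perform_tests starts the timed workload. Condensed from the trace into one runnable sequence (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    rpc.py -s /var/tmp/bperf.sock framework_start_init
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests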
00:19:05.836 14605.00 IOPS, 57.05 MiB/s [2024-11-15T10:42:31.334Z] 14795.50 IOPS, 57.79 MiB/s 00:19:05.836 Latency(us) 00:19:05.836 [2024-11-15T10:42:31.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.836 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:05.836 nvme0n1 : 2.01 14782.03 57.74 0.00 0.00 8652.65 8043.05 23831.27 00:19:05.836 [2024-11-15T10:42:31.334Z] =================================================================================================================== 00:19:05.836 [2024-11-15T10:42:31.334Z] Total : 14782.03 57.74 0.00 0.00 8652.65 8043.05 23831.27 00:19:05.836 { 00:19:05.836 "results": [ 00:19:05.836 { 00:19:05.836 "job": "nvme0n1", 00:19:05.836 "core_mask": "0x2", 00:19:05.836 "workload": "randread", 00:19:05.836 "status": "finished", 00:19:05.836 "queue_depth": 128, 00:19:05.836 "io_size": 4096, 00:19:05.836 "runtime": 2.010481, 00:19:05.836 "iops": 14782.034746908825, 00:19:05.836 "mibps": 57.742323230112596, 00:19:05.836 "io_failed": 0, 00:19:05.836 "io_timeout": 0, 00:19:05.836 "avg_latency_us": 8652.650867856193, 00:19:05.836 "min_latency_us": 8043.054545454545, 00:19:05.836 "max_latency_us": 23831.272727272728 00:19:05.836 } 00:19:05.836 ], 00:19:05.836 "core_count": 1 00:19:05.836 } 00:19:05.836 10:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:05.836 10:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:05.836 10:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:05.836 10:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:05.836 10:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:05.836 | select(.opcode=="crc32c") 00:19:05.836 | "\(.module_name) \(.executed)"' 00:19:05.836 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:05.836 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:05.836 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:05.836 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:05.836 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80250 00:19:05.836 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 80250 ']' 00:19:05.836 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 80250 00:19:05.836 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:19:05.836 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:05.836 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80250 00:19:05.836 killing process with pid 80250 00:19:05.836 Received shutdown signal, test time was about 2.000000 seconds 00:19:05.836 00:19:05.837 Latency(us) 00:19:05.837 [2024-11-15T10:42:31.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:05.837 [2024-11-15T10:42:31.335Z] =================================================================================================================== 00:19:05.837 [2024-11-15T10:42:31.335Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:05.837 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:05.837 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:05.837 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80250' 00:19:05.837 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 80250 00:19:05.837 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 80250 00:19:06.095 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:19:06.095 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:06.095 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:06.095 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:06.095 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:06.095 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:06.095 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:06.095 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80297 00:19:06.095 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:06.095 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80297 /var/tmp/bperf.sock 00:19:06.095 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 80297 ']' 00:19:06.095 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:06.095 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:06.095 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:06.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:06.095 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:06.095 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:06.095 [2024-11-15 10:42:31.555503] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:19:06.095 [2024-11-15 10:42:31.555821] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80297 ] 00:19:06.095 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:06.095 Zero copy mechanism will not be used. 00:19:06.354 [2024-11-15 10:42:31.704047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.354 [2024-11-15 10:42:31.763728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.354 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:06.354 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:19:06.354 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:06.354 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:06.354 10:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:06.920 [2024-11-15 10:42:32.162629] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:06.920 10:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:06.920 10:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:07.178 nvme0n1 00:19:07.178 10:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:07.178 10:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:07.178 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:07.178 Zero copy mechanism will not be used. 00:19:07.178 Running I/O for 2 seconds... 
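After each run, the trace above reads accel_get_stats over the bperf socket and checks that the crc32c digest work actually executed, and in the expected module ("software" here, since the test sets scan_dsa=false). A minimal sketch of that verification step, using the exact jq filter from the trace:

# Pull accel stats from bperf and split out the crc32c module name and
# execution count, mirroring host/digest.sh's read of acc_module/acc_executed.
read -r acc_module acc_executed < <(
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
  | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
# Fail if no digests were computed, or if they ran in the wrong module.
(( acc_executed > 0 )) || exit 1
[[ $acc_module == software ]] || exit 1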
00:19:09.486 7344.00 IOPS, 918.00 MiB/s [2024-11-15T10:42:34.984Z] 7464.00 IOPS, 933.00 MiB/s 00:19:09.486 Latency(us) 00:19:09.486 [2024-11-15T10:42:34.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.486 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:09.486 nvme0n1 : 2.00 7460.98 932.62 0.00 0.00 2140.87 1891.61 9770.82 00:19:09.486 [2024-11-15T10:42:34.984Z] =================================================================================================================== 00:19:09.486 [2024-11-15T10:42:34.984Z] Total : 7460.98 932.62 0.00 0.00 2140.87 1891.61 9770.82 00:19:09.486 { 00:19:09.486 "results": [ 00:19:09.486 { 00:19:09.486 "job": "nvme0n1", 00:19:09.486 "core_mask": "0x2", 00:19:09.486 "workload": "randread", 00:19:09.486 "status": "finished", 00:19:09.486 "queue_depth": 16, 00:19:09.486 "io_size": 131072, 00:19:09.486 "runtime": 2.002955, 00:19:09.486 "iops": 7460.976407358128, 00:19:09.486 "mibps": 932.622050919766, 00:19:09.486 "io_failed": 0, 00:19:09.486 "io_timeout": 0, 00:19:09.486 "avg_latency_us": 2140.8691454156124, 00:19:09.486 "min_latency_us": 1891.6072727272726, 00:19:09.486 "max_latency_us": 9770.821818181817 00:19:09.486 } 00:19:09.486 ], 00:19:09.486 "core_count": 1 00:19:09.486 } 00:19:09.486 10:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:09.486 10:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:09.486 10:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:09.486 10:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:09.486 10:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:09.486 | select(.opcode=="crc32c") 00:19:09.486 | "\(.module_name) \(.executed)"' 00:19:09.744 10:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:09.744 10:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:09.744 10:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:09.744 10:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:09.744 10:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80297 00:19:09.744 10:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 80297 ']' 00:19:09.744 10:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 80297 00:19:09.744 10:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:19:09.744 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:09.744 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80297 00:19:09.745 killing process with pid 80297 00:19:09.745 Received shutdown signal, test time was about 2.000000 seconds 00:19:09.745 00:19:09.745 Latency(us) 00:19:09.745 [2024-11-15T10:42:35.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:09.745 [2024-11-15T10:42:35.243Z] =================================================================================================================== 00:19:09.745 [2024-11-15T10:42:35.243Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:09.745 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:09.745 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:09.745 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80297' 00:19:09.745 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 80297 00:19:09.745 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 80297 00:19:09.745 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:19:09.745 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:09.745 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:09.745 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:09.745 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:09.745 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:09.745 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:09.745 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80350 00:19:09.745 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:09.745 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80350 /var/tmp/bperf.sock 00:19:09.745 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 80350 ']' 00:19:09.745 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:09.745 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:09.745 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:09.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:09.745 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:09.745 10:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:10.002 [2024-11-15 10:42:35.286952] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:19:10.003 [2024-11-15 10:42:35.287315] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80350 ] 00:19:10.003 [2024-11-15 10:42:35.432347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.003 [2024-11-15 10:42:35.490803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.937 10:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:10.937 10:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:19:10.937 10:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:10.937 10:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:10.937 10:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:11.195 [2024-11-15 10:42:36.684391] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:11.452 10:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:11.452 10:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:11.711 nvme0n1 00:19:11.711 10:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:11.711 10:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:11.711 Running I/O for 2 seconds... 
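The summary table each run prints is derived from the JSON results blob beside it: IOPS is io_count divided by runtime, and MiB/s is iops times io_size over 2^20. A quick cross-check (not part of the test itself) against the 128 KiB randread numbers above:

# 7460.976... IOPS at 131072-byte I/Os -> ~932.62 MiB/s, matching the table.
jq -n '7460.976407358128 * 131072 / 1048576'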
00:19:14.020 15495.00 IOPS, 60.53 MiB/s [2024-11-15T10:42:39.518Z] 15367.50 IOPS, 60.03 MiB/s 00:19:14.020 Latency(us) 00:19:14.020 [2024-11-15T10:42:39.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.020 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:14.020 nvme0n1 : 2.00 15404.50 60.17 0.00 0.00 8301.95 2770.39 16086.11 00:19:14.020 [2024-11-15T10:42:39.518Z] =================================================================================================================== 00:19:14.020 [2024-11-15T10:42:39.518Z] Total : 15404.50 60.17 0.00 0.00 8301.95 2770.39 16086.11 00:19:14.020 { 00:19:14.020 "results": [ 00:19:14.020 { 00:19:14.020 "job": "nvme0n1", 00:19:14.020 "core_mask": "0x2", 00:19:14.020 "workload": "randwrite", 00:19:14.020 "status": "finished", 00:19:14.020 "queue_depth": 128, 00:19:14.020 "io_size": 4096, 00:19:14.020 "runtime": 2.003506, 00:19:14.020 "iops": 15404.495918654598, 00:19:14.020 "mibps": 60.173812182244525, 00:19:14.020 "io_failed": 0, 00:19:14.020 "io_timeout": 0, 00:19:14.020 "avg_latency_us": 8301.953384841512, 00:19:14.020 "min_latency_us": 2770.3854545454546, 00:19:14.020 "max_latency_us": 16086.10909090909 00:19:14.020 } 00:19:14.020 ], 00:19:14.020 "core_count": 1 00:19:14.020 } 00:19:14.020 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:14.020 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:14.020 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:14.020 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:14.020 | select(.opcode=="crc32c") 00:19:14.020 | "\(.module_name) \(.executed)"' 00:19:14.020 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:14.278 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:14.278 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:14.278 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:14.278 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:14.278 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80350 00:19:14.278 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 80350 ']' 00:19:14.278 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 80350 00:19:14.278 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:19:14.278 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:14.278 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80350 00:19:14.278 killing process with pid 80350 00:19:14.278 Received shutdown signal, test time was about 2.000000 seconds 00:19:14.278 00:19:14.278 Latency(us) 00:19:14.278 [2024-11-15T10:42:39.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:14.278 [2024-11-15T10:42:39.776Z] =================================================================================================================== 00:19:14.278 [2024-11-15T10:42:39.776Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:14.278 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:14.278 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:14.278 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80350' 00:19:14.278 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 80350 00:19:14.278 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 80350 00:19:14.537 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:19:14.537 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:14.537 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:14.537 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:14.537 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:14.537 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:14.537 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:14.537 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80411 00:19:14.537 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:14.537 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80411 /var/tmp/bperf.sock 00:19:14.537 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 80411 ']' 00:19:14.537 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:14.537 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:14.537 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:14.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:14.537 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:14.537 10:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:14.537 [2024-11-15 10:42:39.839414] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:19:14.537 [2024-11-15 10:42:39.839501] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80411 ] 00:19:14.537 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:14.537 Zero copy mechanism will not be used. 00:19:14.537 [2024-11-15 10:42:39.988365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.795 [2024-11-15 10:42:40.046748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.795 10:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:14.795 10:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:19:14.795 10:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:14.795 10:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:14.795 10:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:15.053 [2024-11-15 10:42:40.452142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:15.053 10:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:15.053 10:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:15.619 nvme0n1 00:19:15.619 10:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:15.619 10:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:15.619 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:15.619 Zero copy mechanism will not be used. 00:19:15.619 Running I/O for 2 seconds...
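The waitforlisten calls traced above block (up to max_retries=100) until the freshly started bdevperf answers on its UNIX socket. A stand-in loop with the same effect, not the autotest_common.sh implementation, could poll any cheap RPC:

SOCK=/var/tmp/bperf.sock
for ((i = 0; i < 100; i++)); do
  # rpc_get_methods succeeds once the app is up and serving RPCs.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.1
done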
00:19:17.944 6183.00 IOPS, 772.88 MiB/s [2024-11-15T10:42:43.442Z] 6620.00 IOPS, 827.50 MiB/s 00:19:17.944 Latency(us) 00:19:17.944 [2024-11-15T10:42:43.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.944 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:17.944 nvme0n1 : 2.00 6618.04 827.25 0.00 0.00 2411.81 1526.69 8340.95 00:19:17.944 [2024-11-15T10:42:43.442Z] =================================================================================================================== 00:19:17.944 [2024-11-15T10:42:43.442Z] Total : 6618.04 827.25 0.00 0.00 2411.81 1526.69 8340.95 00:19:17.944 { 00:19:17.944 "results": [ 00:19:17.944 { 00:19:17.944 "job": "nvme0n1", 00:19:17.944 "core_mask": "0x2", 00:19:17.944 "workload": "randwrite", 00:19:17.944 "status": "finished", 00:19:17.944 "queue_depth": 16, 00:19:17.944 "io_size": 131072, 00:19:17.944 "runtime": 2.004068, 00:19:17.944 "iops": 6618.03890885938, 00:19:17.944 "mibps": 827.2548636074225, 00:19:17.944 "io_failed": 0, 00:19:17.944 "io_timeout": 0, 00:19:17.944 "avg_latency_us": 2411.807386509291, 00:19:17.944 "min_latency_us": 1526.6909090909091, 00:19:17.944 "max_latency_us": 8340.945454545454 00:19:17.944 } 00:19:17.944 ], 00:19:17.944 "core_count": 1 00:19:17.944 } 00:19:17.944 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:17.944 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:17.944 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:17.944 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:17.944 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:17.944 | select(.opcode=="crc32c") 00:19:17.944 | "\(.module_name) \(.executed)"' 00:19:17.944 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:17.944 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:17.944 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:17.944 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:17.944 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80411 00:19:17.944 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 80411 ']' 00:19:17.944 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 80411 00:19:17.944 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:19:17.945 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:17.945 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80411 00:19:17.945 killing process with pid 80411 00:19:17.945 Received shutdown signal, test time was about 2.000000 seconds 00:19:17.945 00:19:17.945 Latency(us) 00:19:17.945 [2024-11-15T10:42:43.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:17.945 [2024-11-15T10:42:43.443Z] =================================================================================================================== 00:19:17.945 [2024-11-15T10:42:43.443Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:17.945 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:17.945 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:17.945 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80411' 00:19:17.945 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 80411 00:19:17.945 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 80411 00:19:18.204 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80224 00:19:18.204 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 80224 ']' 00:19:18.204 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 80224 00:19:18.204 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:19:18.204 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:18.204 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80224 00:19:18.204 killing process with pid 80224 00:19:18.204 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:18.204 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:18.204 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80224' 00:19:18.204 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 80224 00:19:18.204 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 80224 00:19:18.463 ************************************ 00:19:18.463 END TEST nvmf_digest_clean 00:19:18.463 ************************************ 00:19:18.463 00:19:18.463 real 0m16.595s 00:19:18.463 user 0m32.782s 00:19:18.463 sys 0m4.658s 00:19:18.463 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:18.463 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:18.463 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:19:18.463 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:19:18.463 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:18.463 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:18.463 ************************************ 00:19:18.463 START TEST nvmf_digest_error 00:19:18.463 ************************************ 00:19:18.463 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:19:18.463 10:42:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:19:18.463 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:18.463 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:18.463 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:18.463 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80487 00:19:18.463 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80487 00:19:18.463 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80487 ']' 00:19:18.463 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:18.463 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.463 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:18.463 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.463 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:18.463 10:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:18.463 [2024-11-15 10:42:43.951927] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:19:18.463 [2024-11-15 10:42:43.952210] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.721 [2024-11-15 10:42:44.098703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.721 [2024-11-15 10:42:44.160782] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.721 [2024-11-15 10:42:44.160844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.721 [2024-11-15 10:42:44.160861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.721 [2024-11-15 10:42:44.160870] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.721 [2024-11-15 10:42:44.160877] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
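The nvmf_digest_error test starting here injects CRC-32C failures on purpose so that digest mismatches surface as transient transport errors. Condensed from the RPCs traced below (target-side bdev and listener setup elided), the sequence is roughly:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side (started with --wait-for-rpc): route crc32c through the
# error-injecting accel module, so the digests it computes can be corrupted.
"$RPC" accel_assign_opc -o crc32c -m error

# bperf side: keep NVMe error stats and retry indefinitely, so injected
# digest failures are observed rather than failing the run outright.
"$RPC" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
"$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
  -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt the next 256 crc32c operations on the target; the host then logs
# the nvme_tcp.c "data digest error" NOTICEs seen below.
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256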
00:19:18.721 [2024-11-15 10:42:44.161275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.658 10:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:19.658 10:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:19:19.658 10:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:19.658 10:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:19.658 10:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:19.658 10:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.658 10:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:19:19.658 10:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.658 10:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:19.658 [2024-11-15 10:42:44.974028] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:19:19.658 10:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.658 10:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:19:19.658 10:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:19:19.658 10:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.658 10:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:19.658 [2024-11-15 10:42:45.036277] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:19.658 null0 00:19:19.658 [2024-11-15 10:42:45.091177] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.658 [2024-11-15 10:42:45.115306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:19.658 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.658 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:19:19.658 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:19.658 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:19.658 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:19.658 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:19.658 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80525 00:19:19.658 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80525 /var/tmp/bperf.sock 00:19:19.658 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:19:19.658 10:42:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80525 ']' 00:19:19.658 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:19.658 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:19.658 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:19.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:19.658 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:19.658 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:19.922 [2024-11-15 10:42:45.169672] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:19:19.922 [2024-11-15 10:42:45.169771] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80525 ] 00:19:19.922 [2024-11-15 10:42:45.317094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.922 [2024-11-15 10:42:45.386196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.200 [2024-11-15 10:42:45.444335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:20.200 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:20.200 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:19:20.200 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:20.200 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:20.458 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:20.458 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.458 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:20.458 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.458 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:20.458 10:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:20.716 nvme0n1 00:19:20.716 10:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:20.716 10:42:46 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.716 10:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:20.716 10:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.716 10:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:20.716 10:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:20.974 Running I/O for 2 seconds... 00:19:20.974 [2024-11-15 10:42:46.297692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c13f20) 00:19:20.974 [2024-11-15 10:42:46.297768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.974 [2024-11-15 10:42:46.297787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.974 [2024-11-15 10:42:46.315333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c13f20) 00:19:20.974 [2024-11-15 10:42:46.315380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.974 [2024-11-15 10:42:46.315398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.974 [2024-11-15 10:42:46.333230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c13f20) 00:19:20.974 [2024-11-15 10:42:46.333274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.974 [2024-11-15 10:42:46.333290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.974 [2024-11-15 10:42:46.351637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c13f20) 00:19:20.974 [2024-11-15 10:42:46.351682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.974 [2024-11-15 10:42:46.351699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.974 [2024-11-15 10:42:46.369755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c13f20) 00:19:20.974 [2024-11-15 10:42:46.369796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.974 [2024-11-15 10:42:46.369812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.974 [2024-11-15 10:42:46.387678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c13f20) 00:19:20.974 [2024-11-15 10:42:46.387868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8480 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.974 [2024-11-15 10:42:46.387888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.974 [2024-11-15 10:42:46.406176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c13f20) 00:19:20.974 [2024-11-15 10:42:46.406220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.974 [2024-11-15 10:42:46.406235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.974 [2024-11-15 10:42:46.424622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c13f20) 00:19:20.974 [2024-11-15 10:42:46.424665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.974 [2024-11-15 10:42:46.424680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.974 [2024-11-15 10:42:46.442617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c13f20) 00:19:20.974 [2024-11-15 10:42:46.442663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.974 [2024-11-15 10:42:46.442678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.974 [2024-11-15 10:42:46.460632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c13f20) 00:19:20.974 [2024-11-15 10:42:46.460675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.974 [2024-11-15 10:42:46.460690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.233 [2024-11-15 10:42:46.478667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c13f20) 00:19:21.233 [2024-11-15 10:42:46.478707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.233 [2024-11-15 10:42:46.478722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.233 [2024-11-15 10:42:46.496589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c13f20) 00:19:21.233 [2024-11-15 10:42:46.496764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.233 [2024-11-15 10:42:46.496783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.233 [2024-11-15 10:42:46.514899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c13f20) 00:19:21.233 [2024-11-15 10:42:46.514958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:25 nsid:1 lba:18930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.233 [2024-11-15 10:42:46.514974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.233 [2024-11-15 10:42:46.532920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c13f20) 00:19:21.233 [2024-11-15 10:42:46.532964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.233 [2024-11-15 10:42:46.532980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.233 [2024-11-15 10:42:46.550387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c13f20) 00:19:21.233 [2024-11-15 10:42:46.550425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.233 [2024-11-15 10:42:46.550440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.233 [2024-11-15 10:42:46.568055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c13f20) 00:19:21.233 [2024-11-15 10:42:46.568236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.233 [2024-11-15 10:42:46.568256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.233 [2024-11-15 10:42:46.585993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c13f20) 00:19:21.233 [2024-11-15 10:42:46.586037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.233 [2024-11-15 10:42:46.586052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.233 [2024-11-15 10:42:46.603559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c13f20) 00:19:21.233 [2024-11-15 10:42:46.603603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.233 [2024-11-15 10:42:46.603619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.233 [2024-11-15 10:42:46.621611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c13f20) 00:19:21.233 [2024-11-15 10:42:46.621660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.233 [2024-11-15 10:42:46.621676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.233 [2024-11-15 10:42:46.639244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c13f20) 00:19:21.233 [2024-11-15 10:42:46.639407] 
[2024-11-15 10:42:46.639426 - 10:42:48.265671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c13f20) -- repeated once per in-flight READ (sqid:1, cids 32 through 126, len:1, varying lba); each error is followed by a command/completion pair of the form:
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:<cid> nsid:1 lba:<lba> len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:<cid> cdw0:0 sqhd:0001 p:0 m:0 dnr:0
13916.00 IOPS, 54.36 MiB/s [2024-11-15T10:42:47.507Z]
14168.50 IOPS, 55.35 MiB/s
00:19:22.802 Latency(us)
[2024-11-15T10:42:48.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:22.802 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:19:22.802 nvme0n1 : 2.01 14161.69 55.32 0.00 0.00 9031.86 8519.68 34317.03
[2024-11-15T10:42:48.300Z] ===================================================================================================================
[2024-11-15T10:42:48.300Z] Total : 14161.69 55.32 0.00 0.00 9031.86 8519.68 34317.03
00:19:22.802 {
00:19:22.802   "results": [
00:19:22.802     {
00:19:22.802       "job": "nvme0n1",
00:19:22.802       "core_mask": "0x2",
00:19:22.802       "workload": "randread",
00:19:22.802       "status": "finished",
00:19:22.802       "queue_depth": 128,
00:19:22.802       "io_size": 4096,
00:19:22.802       "runtime": 2.01,
00:19:22.802       "iops": 14161.691542288558,
00:19:22.802       "mibps": 55.31910758706468,
00:19:22.802       "io_failed": 0,
00:19:22.802       "io_timeout": 0,
00:19:22.802       "avg_latency_us": 9031.862944157898,
00:19:22.802       "min_latency_us": 8519.68,
00:19:22.802       "max_latency_us": 34317.03272727273
00:19:22.802     }
00:19:22.802   ],
00:19:22.802   "core_count": 1
00:19:22.802 }
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 111 > 0 ))
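
The check above distills to one RPC plus a jq filter; a by-hand sketch using the exact socket, bdev name, and filter from this run (the counter is only populated because the controller was configured with --nvme-error-stat):

    # Per-bdev I/O statistics from the bdevperf instance behind /var/tmp/bperf.sock;
    # the nvme_error block breaks completions down by status code.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    # Prints 111 for this run; the test passes because 111 > 0.
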
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80525
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80525 ']'
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80525
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80525
killing process with pid 80525
Received shutdown signal, test time was about 2.000000 seconds
00:19:23.319 Latency(us)
[2024-11-15T10:42:48.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-15T10:42:48.817Z] ===================================================================================================================
[2024-11-15T10:42:48.817Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80525'
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80525
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80525
00:19:23.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80572
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80572 /var/tmp/bperf.sock
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80572 ']'
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
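
The relaunch above boils down to starting bdevperf idle and blocking until its RPC socket answers; a simplified bash sketch of what run_bperf_err and waitforlisten do here (the polling loop is an assumption standing in for waitforlisten's internals, using the generic rpc_get_methods call as a cheap socket probe):

    # Start bdevperf paused (-z) so error injection can be configured before any
    # I/O is issued; workload, I/O size, and queue depth come from
    # run_bperf_err randread 131072 16 above.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Stand-in for waitforlisten: poll until the UNIX-domain RPC socket responds.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
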
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
10:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:23.577 [2024-11-15 10:42:48.913080] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:19:23.577 [2024-11-15 10:42:48.913489] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80572 ]
00:19:23.577 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:23.577 Zero copy mechanism will not be used.
00:19:23.577 [2024-11-15 10:42:49.069018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:23.835 [2024-11-15 10:42:49.137780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:23.835 [2024-11-15 10:42:49.194735] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
10:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
10:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
10:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
10:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
10:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
10:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
10:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
10:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
10:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
10:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:24.356 nvme0n1
10:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
10:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
10:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
10:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
10:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
10:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
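
The xtrace above amounts to a few RPCs of setup before the timed run; a condensed sketch with every flag taken verbatim from this log (assumptions: the accel_error_inject_error calls are shown here against the same bperf socket, which the rpc_cmd wrapper above leaves implicit, and -i 32 is read as an injection interval):

    # Keep per-status-code NVMe error counters and retry I/O indefinitely in the
    # bdev layer, so injected digest errors surface as statistics, not failures.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Make sure no stale injection is active from the previous pass.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        accel_error_inject_error -o crc32c -t disable
    # Attach the target with TCP data digest enabled (--ddgst): every received
    # data PDU now carries a crc32c the host must verify.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt crc32c accel operations at the configured interval, then start the run.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
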
00:19:24.614 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:24.614 Zero copy mechanism will not be used.
00:19:24.614 Running I/O for 2 seconds...
[2024-11-15 10:42:50.015746 - 10:42:50.169018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) -- repeated for the queued READs (sqid:1, cids 0 through 15 cycling, len:32, varying lba); each error is followed by a command/completion pair of the form:
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:<cid> nsid:1 lba:<lba> len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:<cid> cdw0:0 sqhd:<0002|0022|0042|0062> p:0 m:0 dnr:0
dnr:0 00:19:24.875 [2024-11-15 10:42:50.173270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.173309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.173323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.177456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.177495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.177524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.181670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.181707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.181722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.185935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.185974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.185988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.190182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.190221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.190235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.194531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.194569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.194582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.198730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.198768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.198781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.203090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.203139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.203153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.207421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.207461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.207475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.211777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.211815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.211828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.216088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.216126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.216140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.220420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.220459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.220473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.224815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.224853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.224867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.229095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.229134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.229148] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.233547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.233583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.233612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.237838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.237878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.237892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.242032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.242071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.242085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.246335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.246373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.246387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.250644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.250681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.250695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.254883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.254936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.254967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.259366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.259404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.259418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.263788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.263826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.263840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.268115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.268153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.268168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.272546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.272583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.272597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.875 [2024-11-15 10:42:50.276916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.875 [2024-11-15 10:42:50.276954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.875 [2024-11-15 10:42:50.276969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.876 [2024-11-15 10:42:50.281167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.876 [2024-11-15 10:42:50.281205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.876 [2024-11-15 10:42:50.281234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.876 [2024-11-15 10:42:50.285417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.876 [2024-11-15 10:42:50.285456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.876 [2024-11-15 10:42:50.285470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.876 [2024-11-15 10:42:50.289589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.876 [2024-11-15 10:42:50.289625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.876 
[2024-11-15 10:42:50.289639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.876 [2024-11-15 10:42:50.293755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.876 [2024-11-15 10:42:50.293792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.876 [2024-11-15 10:42:50.293807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.876 [2024-11-15 10:42:50.298035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.876 [2024-11-15 10:42:50.298074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.876 [2024-11-15 10:42:50.298088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.876 [2024-11-15 10:42:50.302501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.876 [2024-11-15 10:42:50.302610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.876 [2024-11-15 10:42:50.302627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.876 [2024-11-15 10:42:50.306930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.876 [2024-11-15 10:42:50.306968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.876 [2024-11-15 10:42:50.306997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.876 [2024-11-15 10:42:50.311351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.876 [2024-11-15 10:42:50.311388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.876 [2024-11-15 10:42:50.311417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.876 [2024-11-15 10:42:50.315735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.876 [2024-11-15 10:42:50.315774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.876 [2024-11-15 10:42:50.315788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.876 [2024-11-15 10:42:50.320052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.876 [2024-11-15 10:42:50.320087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17344 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.876 [2024-11-15 10:42:50.320116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.876 [2024-11-15 10:42:50.324318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.876 [2024-11-15 10:42:50.324357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.876 [2024-11-15 10:42:50.324371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.876 [2024-11-15 10:42:50.328665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.876 [2024-11-15 10:42:50.328704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.876 [2024-11-15 10:42:50.328718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.876 [2024-11-15 10:42:50.333048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.876 [2024-11-15 10:42:50.333084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.876 [2024-11-15 10:42:50.333114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.876 [2024-11-15 10:42:50.337534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.876 [2024-11-15 10:42:50.337573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.876 [2024-11-15 10:42:50.337586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.876 [2024-11-15 10:42:50.341836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.876 [2024-11-15 10:42:50.341875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.876 [2024-11-15 10:42:50.341888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.876 [2024-11-15 10:42:50.346075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.876 [2024-11-15 10:42:50.346114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.876 [2024-11-15 10:42:50.346128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.876 [2024-11-15 10:42:50.350315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.876 [2024-11-15 10:42:50.350353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.876 [2024-11-15 10:42:50.350367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.876 [2024-11-15 10:42:50.354548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.876 [2024-11-15 10:42:50.354586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.876 [2024-11-15 10:42:50.354600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.876 [2024-11-15 10:42:50.358773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.876 [2024-11-15 10:42:50.358808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.876 [2024-11-15 10:42:50.358837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.876 [2024-11-15 10:42:50.363041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.876 [2024-11-15 10:42:50.363082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.876 [2024-11-15 10:42:50.363096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.876 [2024-11-15 10:42:50.367490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:24.876 [2024-11-15 10:42:50.367539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.876 [2024-11-15 10:42:50.367554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.136 [2024-11-15 10:42:50.371903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.136 [2024-11-15 10:42:50.371939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.136 [2024-11-15 10:42:50.371969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.136 [2024-11-15 10:42:50.376375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.136 [2024-11-15 10:42:50.376411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.136 [2024-11-15 10:42:50.376441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.136 [2024-11-15 10:42:50.380754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.136 [2024-11-15 10:42:50.380791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.136 [2024-11-15 10:42:50.380821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.136 [2024-11-15 10:42:50.385164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.136 [2024-11-15 10:42:50.385200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.136 [2024-11-15 10:42:50.385230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.136 [2024-11-15 10:42:50.389405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.136 [2024-11-15 10:42:50.389443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.136 [2024-11-15 10:42:50.389458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.136 [2024-11-15 10:42:50.393669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.136 [2024-11-15 10:42:50.393707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.136 [2024-11-15 10:42:50.393720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.136 [2024-11-15 10:42:50.397898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.136 [2024-11-15 10:42:50.397937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.136 [2024-11-15 10:42:50.397950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.136 [2024-11-15 10:42:50.402188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.136 [2024-11-15 10:42:50.402225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.136 [2024-11-15 10:42:50.402254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.136 [2024-11-15 10:42:50.406474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.136 [2024-11-15 10:42:50.406527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.136 [2024-11-15 10:42:50.406542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.136 [2024-11-15 10:42:50.410826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.136 [2024-11-15 
10:42:50.410863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.136 [2024-11-15 10:42:50.410877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.136 [2024-11-15 10:42:50.415294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.136 [2024-11-15 10:42:50.415331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.136 [2024-11-15 10:42:50.415360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.136 [2024-11-15 10:42:50.419644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.136 [2024-11-15 10:42:50.419681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.136 [2024-11-15 10:42:50.419695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.136 [2024-11-15 10:42:50.424004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.136 [2024-11-15 10:42:50.424042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.136 [2024-11-15 10:42:50.424056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.136 [2024-11-15 10:42:50.428611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.136 [2024-11-15 10:42:50.428649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.136 [2024-11-15 10:42:50.428663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.136 [2024-11-15 10:42:50.432910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.136 [2024-11-15 10:42:50.432947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.136 [2024-11-15 10:42:50.432976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.136 [2024-11-15 10:42:50.437217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.136 [2024-11-15 10:42:50.437271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.136 [2024-11-15 10:42:50.437301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.136 [2024-11-15 10:42:50.441531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x58ea90) 00:19:25.136 [2024-11-15 10:42:50.441581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.136 [2024-11-15 10:42:50.441596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.136 [2024-11-15 10:42:50.445800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.136 [2024-11-15 10:42:50.445839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.136 [2024-11-15 10:42:50.445853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.136 [2024-11-15 10:42:50.450248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.450286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.450316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.454557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.454594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.454608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.458958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.458994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.459024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.463336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.463375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.463389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.467588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.467626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.467641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.471809] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.471847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.471861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.476121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.476160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.476174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.480474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.480523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.480537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.484725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.484762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.484776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.489003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.489041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.489055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.493339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.493379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.493393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.497624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.497674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.497688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:19:25.137 [2024-11-15 10:42:50.501813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.501852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.501866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.506070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.506109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.506123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.510312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.510350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.510364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.514594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.514631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.514645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.518920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.518959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.518973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.523261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.523301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.523315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.527627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.527664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.527679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.532002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.532040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.532069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.536410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.536450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.536479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.540728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.540766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.540781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.544910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.544946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.544976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.549275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.549311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.549340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.553624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.553670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.137 [2024-11-15 10:42:50.553685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.137 [2024-11-15 10:42:50.558194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.137 [2024-11-15 10:42:50.558232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.138 [2024-11-15 10:42:50.558262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.138 [2024-11-15 10:42:50.562697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.138 [2024-11-15 10:42:50.562733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.138 [2024-11-15 10:42:50.562763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.138 [2024-11-15 10:42:50.567079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.138 [2024-11-15 10:42:50.567117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.138 [2024-11-15 10:42:50.567147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.138 [2024-11-15 10:42:50.571456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.138 [2024-11-15 10:42:50.571494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.138 [2024-11-15 10:42:50.571524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.138 [2024-11-15 10:42:50.575730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.138 [2024-11-15 10:42:50.575781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.138 [2024-11-15 10:42:50.575809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.138 [2024-11-15 10:42:50.579978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.138 [2024-11-15 10:42:50.580014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.138 [2024-11-15 10:42:50.580045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.138 [2024-11-15 10:42:50.584219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.138 [2024-11-15 10:42:50.584256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.138 [2024-11-15 10:42:50.584285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.138 [2024-11-15 10:42:50.588509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.138 [2024-11-15 10:42:50.588558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.138 [2024-11-15 10:42:50.588573] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.138 [2024-11-15 10:42:50.592838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.138 [2024-11-15 10:42:50.592886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.138 [2024-11-15 10:42:50.592899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.138 [2024-11-15 10:42:50.597179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.138 [2024-11-15 10:42:50.597217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.138 [2024-11-15 10:42:50.597231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.138 [2024-11-15 10:42:50.601489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.138 [2024-11-15 10:42:50.601535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.138 [2024-11-15 10:42:50.601550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.138 [2024-11-15 10:42:50.605752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.138 [2024-11-15 10:42:50.605790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.138 [2024-11-15 10:42:50.605804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.138 [2024-11-15 10:42:50.610157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.138 [2024-11-15 10:42:50.610195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.138 [2024-11-15 10:42:50.610209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.138 [2024-11-15 10:42:50.614583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.138 [2024-11-15 10:42:50.614620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.138 [2024-11-15 10:42:50.614634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.138 [2024-11-15 10:42:50.618814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.138 [2024-11-15 10:42:50.618853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.138 [2024-11-15 
10:42:50.618867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.138 [2024-11-15 10:42:50.623066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.138 [2024-11-15 10:42:50.623104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.138 [2024-11-15 10:42:50.623118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.138 [2024-11-15 10:42:50.627279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.138 [2024-11-15 10:42:50.627318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.138 [2024-11-15 10:42:50.627332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.399 [2024-11-15 10:42:50.631551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.399 [2024-11-15 10:42:50.631588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.399 [2024-11-15 10:42:50.631602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.399 [2024-11-15 10:42:50.635843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.399 [2024-11-15 10:42:50.635881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.399 [2024-11-15 10:42:50.635895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.399 [2024-11-15 10:42:50.640184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.399 [2024-11-15 10:42:50.640223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.399 [2024-11-15 10:42:50.640237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.399 [2024-11-15 10:42:50.644375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.399 [2024-11-15 10:42:50.644414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.399 [2024-11-15 10:42:50.644428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.399 [2024-11-15 10:42:50.648823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.399 [2024-11-15 10:42:50.648877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:25.399 [2024-11-15 10:42:50.648907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.399 [2024-11-15 10:42:50.653095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.399 [2024-11-15 10:42:50.653132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.399 [2024-11-15 10:42:50.653161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.399 [2024-11-15 10:42:50.657485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.399 [2024-11-15 10:42:50.657534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.399 [2024-11-15 10:42:50.657549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.399 [2024-11-15 10:42:50.661808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.399 [2024-11-15 10:42:50.661846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.399 [2024-11-15 10:42:50.661860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.399 [2024-11-15 10:42:50.666079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.399 [2024-11-15 10:42:50.666118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.666131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.670399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.670437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.670467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.674758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.674795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.674809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.679220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.679257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.679286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.683715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.683752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.683766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.688287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.688341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.688371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.692736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.692788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.692801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.697162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.697197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.697238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.701515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.701579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.701595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.705987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.706148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.706167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.710490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.710552] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.710567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.714772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.714811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.714825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.719081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.719120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.719134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.723400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.723440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.723454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.727687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.727726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.727740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.732134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.732185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.732198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.736632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.736671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.736685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.740971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 
[2024-11-15 10:42:50.741010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.741023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.745315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.745354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.745369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.749683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.749721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.749734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.754008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.754059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.754088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.758500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.758551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.758566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.762849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.762888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.762902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.767162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.767199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.767228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.771561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.771611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.771626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.775733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.400 [2024-11-15 10:42:50.775769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.400 [2024-11-15 10:42:50.775798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.400 [2024-11-15 10:42:50.780075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.780114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.780127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.784363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.784416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.784430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.788577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.788614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.788628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.792859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.792913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.792959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.797186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.797222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.797251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.801510] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.801591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.801606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.805863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.805901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.805915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.810242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.810279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.810309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.814489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.814555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.814586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.818794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.818831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.818861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.823049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.823085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.823114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.827370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.827407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.827436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
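The records above repeat one pattern: SPDK's NVMe/TCP receive path recomputes the CRC32C data digest (DDGST) over each incoming C2H data chunk, finds a mismatch, logs `data digest error on tqpair=(0x58ea90)`, and completes the affected READ with COMMAND TRANSIENT TRANSPORT ERROR (SCT 00h / SC 22h) with dnr:0, leaving the host free to retry. Below is a minimal, self-contained sketch of that digest check, assuming the usual NVMe/TCP convention (reflected CRC32C, polynomial 0x82F63B78, all-ones seed, final complement). The helper names and the toy main() are illustrative only, not SPDK's actual API, which routes the CRC through hardware-accelerated helpers (that is what the accel sequence in `nvme_tcp_accel_seq_recv_compute_crc32_done` refers to).

/* Plain bitwise CRC32C (Castagnoli) sketch of an NVMe/TCP DDGST check. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Reflected CRC32C, polynomial 0x82F63B78; caller supplies seed/final XOR. */
static uint32_t crc32c_update(uint32_t crc, const void *buf, size_t len)
{
	const uint8_t *p = buf;

	while (len--) {
		crc ^= *p++;
		for (int k = 0; k < 8; k++) {
			uint32_t lsb = crc & 1u;

			crc >>= 1;
			if (lsb) {
				crc ^= 0x82F63B78u;
			}
		}
	}
	return crc;
}

/* Returns 0 when the received digest matches the payload, -1 otherwise. */
static int verify_ddgst(const void *data, size_t len, uint32_t recv_ddgst)
{
	uint32_t calc = crc32c_update(0xFFFFFFFFu, data, len) ^ 0xFFFFFFFFu;

	return calc == recv_ddgst ? 0 : -1;
}

int main(void)
{
	uint8_t payload[512] = { 0 };	/* toy payload standing in for C2H READ data */
	uint32_t good = crc32c_update(0xFFFFFFFFu, payload, sizeof(payload)) ^ 0xFFFFFFFFu;

	/* Standard CRC32C check value for "123456789" is 0xE3069283. */
	printf("crc32c(\"123456789\") = 0x%08X\n",
	       crc32c_update(0xFFFFFFFFu, "123456789", 9) ^ 0xFFFFFFFFu);
	printf("intact payload:    %d\n", verify_ddgst(payload, sizeof(payload), good));
	printf("corrupted digest:  %d\n", verify_ddgst(payload, sizeof(payload), good ^ 1u));
	return 0;
}

Because every completion here carries a transient status (00/22) with the do-not-retry bit clear (dnr:0), the initiator keeps reissuing the READs, which is why the same qid:1 command IDs 0 through 15 cycle continuously through this stretch of the log; the interleaved `7068.00 IOPS, 883.50 MiB/s` sample further down appears to be the benchmark's periodic progress line, timestamped by Jenkins.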
00:19:25.401 [2024-11-15 10:42:50.831739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.831775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.831804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.835954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.835991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.836020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.840435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.840475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.840489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.844870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.844909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.844923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.849244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.849281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.849311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.853637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.853700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.853714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.857945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.858004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.858033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.862251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.862302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.862330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.866789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.866828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.866841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.871237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.871291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.871304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.875628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.875680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.875709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.879895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.879945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.879973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.884038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.884090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.884119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.888217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.888269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.888297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.401 [2024-11-15 10:42:50.892661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.401 [2024-11-15 10:42:50.892699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.401 [2024-11-15 10:42:50.892712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.661 [2024-11-15 10:42:50.896974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.661 [2024-11-15 10:42:50.897012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.661 [2024-11-15 10:42:50.897025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.661 [2024-11-15 10:42:50.901467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.661 [2024-11-15 10:42:50.901543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.661 [2024-11-15 10:42:50.901558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.661 [2024-11-15 10:42:50.905902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.661 [2024-11-15 10:42:50.905941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.661 [2024-11-15 10:42:50.905955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.661 [2024-11-15 10:42:50.910241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.661 [2024-11-15 10:42:50.910291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.661 [2024-11-15 10:42:50.910319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.661 [2024-11-15 10:42:50.914591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.661 [2024-11-15 10:42:50.914643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.661 [2024-11-15 10:42:50.914671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.661 [2024-11-15 10:42:50.918838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:50.918890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:50.918919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:50.922998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:50.923050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:50.923078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:50.927168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:50.927219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:50.927247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:50.931344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:50.931395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:50.931423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:50.935535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:50.935585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:50.935613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:50.939678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:50.939729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:50.939758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:50.944171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:50.944209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:50.944223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:50.948686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:50.948739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 
[2024-11-15 10:42:50.948752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:50.952846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:50.952914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:50.952943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:50.957105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:50.957158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:50.957187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:50.961320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:50.961372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:50.961401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:50.965499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:50.965577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:50.965591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:50.969819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:50.969856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:50.969870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:50.974114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:50.974152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:50.974165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:50.978449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:50.978502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12640 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:50.978545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:50.982772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:50.982825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:50.982854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:50.987010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:50.987063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:50.987092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:50.991415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:50.991468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:50.991482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:50.995783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:50.995822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:50.995837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:51.000114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:51.000154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:51.000167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:51.004493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:51.004543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:51.004557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:51.008748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:51.008787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:51.008801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.662 7068.00 IOPS, 883.50 MiB/s [2024-11-15T10:42:51.160Z] [2024-11-15 10:42:51.014700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:51.014738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:51.014751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:51.019075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:51.019131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:51.019144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:51.023347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:51.023401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:51.023414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:51.027688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:51.027724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:51.027737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:51.031902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.662 [2024-11-15 10:42:51.031941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.662 [2024-11-15 10:42:51.031955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.662 [2024-11-15 10:42:51.036108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.663 [2024-11-15 10:42:51.036162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.663 [2024-11-15 10:42:51.036175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.663 [2024-11-15 10:42:51.040293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.663 
[2024-11-15 10:42:51.040349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.663 [2024-11-15 10:42:51.040363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.663 [2024-11-15 10:42:51.044593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.663 [2024-11-15 10:42:51.044630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.663 [2024-11-15 10:42:51.044643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.663 [2024-11-15 10:42:51.048849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.663 [2024-11-15 10:42:51.048889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.663 [2024-11-15 10:42:51.048903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.663 [2024-11-15 10:42:51.053100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.663 [2024-11-15 10:42:51.053138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.663 [2024-11-15 10:42:51.053151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.663 [2024-11-15 10:42:51.057366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.663 [2024-11-15 10:42:51.057404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.663 [2024-11-15 10:42:51.057417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.663 [2024-11-15 10:42:51.061648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.663 [2024-11-15 10:42:51.061696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.663 [2024-11-15 10:42:51.061710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.663 [2024-11-15 10:42:51.065930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:25.663 [2024-11-15 10:42:51.065967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.663 [2024-11-15 10:42:51.065980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.663 [2024-11-15 10:42:51.070197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x58ea90)
00:19:25.663 [2024-11-15 10:42:51.070235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:25.663 [2024-11-15 10:42:51.070248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same three-message sequence — nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90), the nvme_qpair.c: 243:nvme_io_qpair_print_command READ print (qid:1, cid cycling 0-15, nsid:1, varying lba, len:32, SGL TRANSPORT DATA BLOCK), and the nvme_qpair.c: 474:spdk_nvme_print_completion print COMMAND TRANSIENT TRANSPORT ERROR (00/22) with sqhd stepping 0002/0022/0042/0062 — repeats roughly every 4 ms from [2024-11-15 10:42:51.074569] through [2024-11-15 10:42:51.697185] (console timestamps 00:19:25.663 through 00:19:26.451) ...]
00:19:26.451 [2024-11-15 10:42:51.701349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.451
[2024-11-15 10:42:51.701386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.451 [2024-11-15 10:42:51.701400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:26.451 [2024-11-15 10:42:51.705537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.451 [2024-11-15 10:42:51.705574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.451 [2024-11-15 10:42:51.705587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:26.451 [2024-11-15 10:42:51.709753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.451 [2024-11-15 10:42:51.709790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.451 [2024-11-15 10:42:51.709804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:26.451 [2024-11-15 10:42:51.714016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.451 [2024-11-15 10:42:51.714067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.451 [2024-11-15 10:42:51.714095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.451 [2024-11-15 10:42:51.718341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.451 [2024-11-15 10:42:51.718390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.451 [2024-11-15 10:42:51.718435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:26.451 [2024-11-15 10:42:51.722819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.451 [2024-11-15 10:42:51.722870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.451 [2024-11-15 10:42:51.722899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:26.451 [2024-11-15 10:42:51.727137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.451 [2024-11-15 10:42:51.727188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.451 [2024-11-15 10:42:51.727217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:26.451 [2024-11-15 10:42:51.731557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x58ea90) 00:19:26.451 [2024-11-15 10:42:51.731624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.451 [2024-11-15 10:42:51.731637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.451 [2024-11-15 10:42:51.735891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.451 [2024-11-15 10:42:51.735942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.451 [2024-11-15 10:42:51.735971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:26.451 [2024-11-15 10:42:51.740150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.451 [2024-11-15 10:42:51.740201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.451 [2024-11-15 10:42:51.740230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:26.451 [2024-11-15 10:42:51.744444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.451 [2024-11-15 10:42:51.744496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.451 [2024-11-15 10:42:51.744522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:26.451 [2024-11-15 10:42:51.748647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.451 [2024-11-15 10:42:51.748683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.451 [2024-11-15 10:42:51.748696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.451 [2024-11-15 10:42:51.752860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.451 [2024-11-15 10:42:51.752898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.451 [2024-11-15 10:42:51.752911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:26.451 [2024-11-15 10:42:51.757195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.451 [2024-11-15 10:42:51.757245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.451 [2024-11-15 10:42:51.757274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:26.451 [2024-11-15 10:42:51.761639] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.451 [2024-11-15 10:42:51.761701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.451 [2024-11-15 10:42:51.761715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:26.451 [2024-11-15 10:42:51.766082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.451 [2024-11-15 10:42:51.766135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.451 [2024-11-15 10:42:51.766179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.451 [2024-11-15 10:42:51.770521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.451 [2024-11-15 10:42:51.770586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.451 [2024-11-15 10:42:51.770600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:26.451 [2024-11-15 10:42:51.775091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.451 [2024-11-15 10:42:51.775143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.451 [2024-11-15 10:42:51.775171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:26.451 [2024-11-15 10:42:51.779371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.451 [2024-11-15 10:42:51.779438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.451 [2024-11-15 10:42:51.779468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:26.451 [2024-11-15 10:42:51.784080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.451 [2024-11-15 10:42:51.784149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.784179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.788411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.788463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.788493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:19:26.452 [2024-11-15 10:42:51.792792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.792846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.792875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.797103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.797155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.797184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.801367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.801435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.801464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.805777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.805816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.805829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.810089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.810126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.810139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.814547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.814611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.814642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.819017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.819069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.819098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.823474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.823550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.823565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.827842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.827894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.827923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.832482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.832530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.832544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.836959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.837054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.837083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.841331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.841383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.841427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.845603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.845640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.845664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.849854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.849892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.849905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.854108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.854146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.854159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.858351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.858404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.858418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.862843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.862896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.862909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.867332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.867399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.867413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.871627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.871665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.871678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.875880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.875918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.875932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.880187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.880225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.880238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.884461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.884499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.884524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.888731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.888767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.888780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.893199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.893251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.893279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:26.452 [2024-11-15 10:42:51.897622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.452 [2024-11-15 10:42:51.897669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.452 [2024-11-15 10:42:51.897683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:26.453 [2024-11-15 10:42:51.901944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.453 [2024-11-15 10:42:51.902026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.453 [2024-11-15 10:42:51.902055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:26.453 [2024-11-15 10:42:51.906392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.453 [2024-11-15 10:42:51.906456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.453 [2024-11-15 10:42:51.906470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.453 [2024-11-15 10:42:51.910926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.453 [2024-11-15 10:42:51.910978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.453 
[2024-11-15 10:42:51.911006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:26.453 [2024-11-15 10:42:51.915182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.453 [2024-11-15 10:42:51.915233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.453 [2024-11-15 10:42:51.915262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:26.453 [2024-11-15 10:42:51.919724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.453 [2024-11-15 10:42:51.919776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.453 [2024-11-15 10:42:51.919790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:26.453 [2024-11-15 10:42:51.923827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.453 [2024-11-15 10:42:51.923876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.453 [2024-11-15 10:42:51.923905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.453 [2024-11-15 10:42:51.928024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.453 [2024-11-15 10:42:51.928073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.453 [2024-11-15 10:42:51.928101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:26.453 [2024-11-15 10:42:51.932465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.453 [2024-11-15 10:42:51.932541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.453 [2024-11-15 10:42:51.932555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:26.453 [2024-11-15 10:42:51.936840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.453 [2024-11-15 10:42:51.936891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.453 [2024-11-15 10:42:51.936920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:26.453 [2024-11-15 10:42:51.941191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.453 [2024-11-15 10:42:51.941229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:26.453 [2024-11-15 10:42:51.941242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.712 [2024-11-15 10:42:51.945499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.712 [2024-11-15 10:42:51.945546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.712 [2024-11-15 10:42:51.945559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:26.712 [2024-11-15 10:42:51.949792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.712 [2024-11-15 10:42:51.949830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.712 [2024-11-15 10:42:51.949844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:26.712 [2024-11-15 10:42:51.954049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.712 [2024-11-15 10:42:51.954102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.713 [2024-11-15 10:42:51.954115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:26.713 [2024-11-15 10:42:51.958312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.713 [2024-11-15 10:42:51.958365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.713 [2024-11-15 10:42:51.958394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.713 [2024-11-15 10:42:51.962766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.713 [2024-11-15 10:42:51.962817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.713 [2024-11-15 10:42:51.962847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:26.713 [2024-11-15 10:42:51.967108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.713 [2024-11-15 10:42:51.967160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.713 [2024-11-15 10:42:51.967189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:26.713 [2024-11-15 10:42:51.971453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.713 [2024-11-15 10:42:51.971507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.713 [2024-11-15 10:42:51.971534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:26.713 [2024-11-15 10:42:51.975834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.713 [2024-11-15 10:42:51.975886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.713 [2024-11-15 10:42:51.975899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.713 [2024-11-15 10:42:51.980223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.713 [2024-11-15 10:42:51.980275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.713 [2024-11-15 10:42:51.980288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:26.713 [2024-11-15 10:42:51.984622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.713 [2024-11-15 10:42:51.984659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.713 [2024-11-15 10:42:51.984672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:26.713 [2024-11-15 10:42:51.988892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.713 [2024-11-15 10:42:51.988930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.713 [2024-11-15 10:42:51.988944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:26.713 [2024-11-15 10:42:51.993145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.713 [2024-11-15 10:42:51.993182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.713 [2024-11-15 10:42:51.993195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.713 [2024-11-15 10:42:51.997579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.713 [2024-11-15 10:42:51.997616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.713 [2024-11-15 10:42:51.997629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:26.713 [2024-11-15 10:42:52.001849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.713 [2024-11-15 10:42:52.001887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.713 [2024-11-15 10:42:52.001900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:26.713 [2024-11-15 10:42:52.006176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.713 [2024-11-15 10:42:52.006215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.713 [2024-11-15 10:42:52.006228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:26.713 [2024-11-15 10:42:52.010420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x58ea90) 00:19:26.713 [2024-11-15 10:42:52.010457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.713 [2024-11-15 10:42:52.010470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.713 7091.00 IOPS, 886.38 MiB/s 00:19:26.713 Latency(us) 00:19:26.713 [2024-11-15T10:42:52.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.713 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:26.713 nvme0n1 : 2.00 7091.07 886.38 0.00 0.00 2252.64 1966.08 7387.69 00:19:26.713 [2024-11-15T10:42:52.211Z] =================================================================================================================== 00:19:26.713 [2024-11-15T10:42:52.211Z] Total : 7091.07 886.38 0.00 0.00 2252.64 1966.08 7387.69 00:19:26.713 { 00:19:26.713 "results": [ 00:19:26.713 { 00:19:26.713 "job": "nvme0n1", 00:19:26.713 "core_mask": "0x2", 00:19:26.713 "workload": "randread", 00:19:26.713 "status": "finished", 00:19:26.713 "queue_depth": 16, 00:19:26.713 "io_size": 131072, 00:19:26.713 "runtime": 2.002236, 00:19:26.713 "iops": 7091.072181301305, 00:19:26.713 "mibps": 886.3840226626631, 00:19:26.713 "io_failed": 0, 00:19:26.713 "io_timeout": 0, 00:19:26.713 "avg_latency_us": 2252.6410674999042, 00:19:26.713 "min_latency_us": 1966.08, 00:19:26.713 "max_latency_us": 7387.694545454546 00:19:26.713 } 00:19:26.713 ], 00:19:26.713 "core_count": 1 00:19:26.713 } 00:19:26.713 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:26.713 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:26.713 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:26.713 | .driver_specific 00:19:26.713 | .nvme_error 00:19:26.713 | .status_code 00:19:26.713 | .command_transient_transport_error' 00:19:26.713 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:26.973 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 458 > 0 )) 00:19:26.973 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80572 00:19:26.973 10:42:52 
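The jq pipeline in the trace above is how digest.sh turns the raw iostat JSON into the number it asserts on: bdev_nvme_set_options --nvme-error-stat makes the bdev layer keep per-status-code NVMe error counters, and bdev_get_iostat reports them under driver_specific.nvme_error. A minimal stand-alone sketch of the same query; the rpc.py invocation and jq filter are verbatim from the trace, while the variable name and the fallback message are illustrative:

    # Count of commands that completed with TRANSIENT TRANSPORT ERROR (00/22),
    # accumulated since --nvme-error-stat was enabled on this bdevperf instance.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')
    # The test only passes if at least one injected digest error was observed;
    # this run counted 458, hence the (( 458 > 0 )) assertion in the trace.
    (( errcount > 0 )) || echo "no transient transport errors detected" >&2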
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80572 ']' 00:19:26.973 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80572 00:19:26.973 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:19:26.973 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:26.973 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80572 00:19:26.973 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:26.973 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:26.973 killing process with pid 80572 00:19:26.973 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80572' 00:19:26.973 Received shutdown signal, test time was about 2.000000 seconds 00:19:26.973 00:19:26.973 Latency(us) 00:19:26.973 [2024-11-15T10:42:52.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.973 [2024-11-15T10:42:52.471Z] =================================================================================================================== 00:19:26.973 [2024-11-15T10:42:52.471Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:26.973 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80572 00:19:26.973 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80572 00:19:27.232 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:19:27.232 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:27.232 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:19:27.232 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:27.232 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:27.232 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80629 00:19:27.232 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:19:27.232 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80629 /var/tmp/bperf.sock 00:19:27.232 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80629 ']' 00:19:27.232 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:27.232 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:27.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:27.232 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
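The waitforlisten call above blocks until the freshly launched bdevperf (held idle by -z until perform_tests is issued) exposes its RPC socket. A simplified sketch of that launch-and-wait pattern, with the bdevperf command line taken verbatim from the trace; the polling loop below is an assumption that only tests for the UNIX socket and the pid, whereas the real helper in autotest_common.sh is more defensive:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    max_retries=100
    # Poll for the RPC listen socket instead of sleeping a fixed amount of time.
    until [[ -S /var/tmp/bperf.sock ]]; do
      (( max_retries-- > 0 )) || { echo "bperf.sock never appeared" >&2; exit 1; }
      kill -0 "$bperfpid"     || { echo "bdevperf exited early" >&2; exit 1; }
      sleep 0.1
    done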
00:19:27.232 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:19:27.232 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:27.232 [2024-11-15 10:42:52.597641] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:19:27.232 [2024-11-15 10:42:52.597744] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80629 ]
00:19:27.490 [2024-11-15 10:42:52.741406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:27.490 [2024-11-15 10:42:52.801048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:27.490 [2024-11-15 10:42:52.855202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:19:27.490 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:19:27.490 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:19:27.490 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:27.490 10:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:27.748 10:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:19:27.748 10:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:27.748 10:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:27.748 10:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:27.748 10:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:27.748 10:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:28.315 nvme0n1
10:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:19:28.315 10:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:28.315 10:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:28.315 10:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:28.315 10:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:19:28.315 10:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:19:28.315 Running I/O for 2 seconds...
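Condensed from the trace above, the setup for this write-side run is four RPCs. Every path and argument below is verbatim from the log; the grouping and the host/target attribution are interpretation (bperf_rpc adds -s /var/tmp/bperf.sock for the bdevperf instance, while rpc_cmd with no -s flag goes to the default RPC socket, which in this job is the nvmf target app):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Host side: keep per-status-code NVMe error counters and never retry,
    # so every injected digest error surfaces as a failed I/O.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Default socket (assumed: the nvmf target): make sure no crc32c
    # corruption is active while the controller attaches.
    $rpc accel_error_inject_error -o crc32c -t disable
    # Attach with --ddgst so every data PDU carries a CRC32C data digest.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Re-enable crc32c corruption (-i 256, as invoked by digest.sh); the
    # resulting digest mismatches complete the affected commands with
    # COMMAND TRANSIENT TRANSPORT ERROR (00/22), as the run below shows.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256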
00:19:28.315 [2024-11-15 10:42:53.684201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ef7100
00:19:28.315 [2024-11-15 10:42:53.685859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:28.315 [2024-11-15 10:42:53.685904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:28.315 [2024-11-15 10:42:53.700562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ef7970
00:19:28.315 [2024-11-15 10:42:53.702145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:28.315 [2024-11-15 10:42:53.702186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... about 25 further WRITE completions elided (10:42:53.716 through 10:42:54.112): the same Data digest error repeats on tqpair=(0x10085b0) roughly every 16 ms, with pdu stepping from 0x200016ef81e0 up through 0x200016eff3c8 and back down, cid first counting down 27 to 1 and then up from 4, and every 0x1000-byte WRITE completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:19:28.833 [2024-11-15 10:42:54.128762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ef92c0
00:19:28.833 [2024-11-15 10:42:54.131064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:28.833 [2024-11-15 10:42:54.131101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:28.833 [2024-11-15 10:42:54.144894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ef8a50 00:19:28.833 [2024-11-15 10:42:54.147188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.833 [2024-11-15 10:42:54.147223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:28.833 [2024-11-15 10:42:54.161194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ef81e0 00:19:28.833 [2024-11-15 10:42:54.163474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.833 [2024-11-15 10:42:54.163519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:28.833 [2024-11-15 10:42:54.177312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ef7970 00:19:28.833 [2024-11-15 10:42:54.179579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.833 [2024-11-15 10:42:54.179614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:28.833 [2024-11-15 10:42:54.193447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ef7100 00:19:28.833 [2024-11-15 10:42:54.195767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.833 [2024-11-15 10:42:54.195805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:28.833 [2024-11-15 10:42:54.209689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ef6890 00:19:28.833 [2024-11-15 10:42:54.211907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.833 [2024-11-15 10:42:54.211961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.833 [2024-11-15 10:42:54.226524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ef6020 00:19:28.833 [2024-11-15 10:42:54.228729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.833 [2024-11-15 10:42:54.228772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:28.833 [2024-11-15 10:42:54.242763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ef57b0 00:19:28.833 [2024-11-15 10:42:54.244931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.833 [2024-11-15 10:42:54.244970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:28.833 [2024-11-15 10:42:54.259044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ef4f40 00:19:28.833 [2024-11-15 10:42:54.261201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.833 [2024-11-15 10:42:54.261242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:28.833 [2024-11-15 10:42:54.275189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ef46d0 00:19:28.833 [2024-11-15 10:42:54.277312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.833 [2024-11-15 10:42:54.277349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:28.833 [2024-11-15 10:42:54.291282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ef3e60 00:19:28.833 [2024-11-15 10:42:54.293380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.833 [2024-11-15 10:42:54.293414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:28.833 [2024-11-15 10:42:54.307412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ef35f0 00:19:28.833 [2024-11-15 10:42:54.309497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.833 [2024-11-15 10:42:54.309543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:28.833 [2024-11-15 10:42:54.323544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ef2d80 00:19:28.833 [2024-11-15 10:42:54.325614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.833 [2024-11-15 10:42:54.325667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:29.092 [2024-11-15 10:42:54.339663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ef2510 00:19:29.092 [2024-11-15 10:42:54.341713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.092 [2024-11-15 10:42:54.341751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:29.092 [2024-11-15 10:42:54.355740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ef1ca0 00:19:29.092 [2024-11-15 10:42:54.357766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.092 [2024-11-15 
10:42:54.357807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:29.092 [2024-11-15 10:42:54.371802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ef1430 00:19:29.092 [2024-11-15 10:42:54.373810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.092 [2024-11-15 10:42:54.373847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:29.092 [2024-11-15 10:42:54.387939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ef0bc0 00:19:29.092 [2024-11-15 10:42:54.389945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.092 [2024-11-15 10:42:54.389982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:29.092 [2024-11-15 10:42:54.404090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ef0350 00:19:29.092 [2024-11-15 10:42:54.406080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.092 [2024-11-15 10:42:54.406116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:29.092 [2024-11-15 10:42:54.420304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eefae0 00:19:29.092 [2024-11-15 10:42:54.422319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.092 [2024-11-15 10:42:54.422360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:29.092 [2024-11-15 10:42:54.436500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eef270 00:19:29.092 [2024-11-15 10:42:54.438444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.092 [2024-11-15 10:42:54.438481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:29.092 [2024-11-15 10:42:54.452601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eeea00 00:19:29.092 [2024-11-15 10:42:54.454510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.092 [2024-11-15 10:42:54.454554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:29.092 [2024-11-15 10:42:54.468708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eee190 00:19:29.092 [2024-11-15 10:42:54.470602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:29.092 [2024-11-15 10:42:54.470639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.092 [2024-11-15 10:42:54.484792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eed920 00:19:29.092 [2024-11-15 10:42:54.486669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.092 [2024-11-15 10:42:54.486706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:29.092 [2024-11-15 10:42:54.501082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eed0b0 00:19:29.092 [2024-11-15 10:42:54.502988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.092 [2024-11-15 10:42:54.503035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:29.092 [2024-11-15 10:42:54.517978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eec840 00:19:29.092 [2024-11-15 10:42:54.519853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.092 [2024-11-15 10:42:54.519899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:29.092 [2024-11-15 10:42:54.534821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eebfd0 00:19:29.092 [2024-11-15 10:42:54.536680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.092 [2024-11-15 10:42:54.536730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:29.092 [2024-11-15 10:42:54.551476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eeb760 00:19:29.092 [2024-11-15 10:42:54.553316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.093 [2024-11-15 10:42:54.553362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:29.093 [2024-11-15 10:42:54.567816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eeaef0 00:19:29.093 [2024-11-15 10:42:54.569603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.093 [2024-11-15 10:42:54.569644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:29.093 [2024-11-15 10:42:54.583990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eea680 00:19:29.093 [2024-11-15 10:42:54.585757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20590 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:29.093 [2024-11-15 10:42:54.585797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:29.355 [2024-11-15 10:42:54.600245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ee9e10 00:19:29.355 [2024-11-15 10:42:54.602019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.355 [2024-11-15 10:42:54.602059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:29.355 [2024-11-15 10:42:54.616503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ee95a0 00:19:29.355 [2024-11-15 10:42:54.618245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.355 [2024-11-15 10:42:54.618283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:29.355 [2024-11-15 10:42:54.632735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ee8d30 00:19:29.355 [2024-11-15 10:42:54.634428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.355 [2024-11-15 10:42:54.634466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:29.355 [2024-11-15 10:42:54.649044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ee84c0 00:19:29.356 [2024-11-15 10:42:54.650748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.356 [2024-11-15 10:42:54.650789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:29.356 [2024-11-15 10:42:54.665403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016ee7c50 00:19:29.356 [2024-11-15 10:42:54.667077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.356 [2024-11-15 10:42:54.667116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:29.356 15561.00 IOPS, 60.79 MiB/s [2024-11-15T10:42:54.854Z] [2024-11-15 10:42:54.681772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.356 [2024-11-15 10:42:54.681986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.356 [2024-11-15 10:42:54.682010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.356 [2024-11-15 10:42:54.694744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.356 [2024-11-15 10:42:54.694983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.356 [2024-11-15 10:42:54.695005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.356 [2024-11-15 10:42:54.707663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.356 [2024-11-15 10:42:54.707890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.356 [2024-11-15 10:42:54.707921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.356 [2024-11-15 10:42:54.720582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.356 [2024-11-15 10:42:54.720813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.357 [2024-11-15 10:42:54.720838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.357 [2024-11-15 10:42:54.733399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.357 [2024-11-15 10:42:54.733634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.357 [2024-11-15 10:42:54.733671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.357 [2024-11-15 10:42:54.746343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.357 [2024-11-15 10:42:54.746576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.357 [2024-11-15 10:42:54.746600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.357 [2024-11-15 10:42:54.759216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.357 [2024-11-15 10:42:54.759436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.357 [2024-11-15 10:42:54.759472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.357 [2024-11-15 10:42:54.772102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.357 [2024-11-15 10:42:54.772321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.357 [2024-11-15 10:42:54.772355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.357 [2024-11-15 10:42:54.785060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.358 [2024-11-15 
10:42:54.785275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.358 [2024-11-15 10:42:54.785302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.358 [2024-11-15 10:42:54.798030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.358 [2024-11-15 10:42:54.798246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.358 [2024-11-15 10:42:54.798279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.358 [2024-11-15 10:42:54.810995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.358 [2024-11-15 10:42:54.811212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.358 [2024-11-15 10:42:54.811252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.358 [2024-11-15 10:42:54.823904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.358 [2024-11-15 10:42:54.824125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.358 [2024-11-15 10:42:54.824161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.358 [2024-11-15 10:42:54.836794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.358 [2024-11-15 10:42:54.837017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.358 [2024-11-15 10:42:54.837050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.358 [2024-11-15 10:42:54.849792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.359 [2024-11-15 10:42:54.850037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.359 [2024-11-15 10:42:54.850082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.628 [2024-11-15 10:42:54.862714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.628 [2024-11-15 10:42:54.862941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.628 [2024-11-15 10:42:54.862972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.628 [2024-11-15 10:42:54.875641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 
00:19:29.628 [2024-11-15 10:42:54.875857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.628 [2024-11-15 10:42:54.875888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.628 [2024-11-15 10:42:54.888534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.628 [2024-11-15 10:42:54.888755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.628 [2024-11-15 10:42:54.888776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.628 [2024-11-15 10:42:54.901419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.628 [2024-11-15 10:42:54.901663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.628 [2024-11-15 10:42:54.901690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.628 [2024-11-15 10:42:54.914404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.628 [2024-11-15 10:42:54.914635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.628 [2024-11-15 10:42:54.914659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.628 [2024-11-15 10:42:54.927374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.628 [2024-11-15 10:42:54.927607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.628 [2024-11-15 10:42:54.927638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.628 [2024-11-15 10:42:54.940342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.628 [2024-11-15 10:42:54.940578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.628 [2024-11-15 10:42:54.940609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.628 [2024-11-15 10:42:54.953247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.628 [2024-11-15 10:42:54.953459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.628 [2024-11-15 10:42:54.953489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.628 [2024-11-15 10:42:54.966208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) 
with pdu=0x200016eddc00 00:19:29.628 [2024-11-15 10:42:54.966426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.628 [2024-11-15 10:42:54.966453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.628 [2024-11-15 10:42:54.979169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.628 [2024-11-15 10:42:54.979390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.628 [2024-11-15 10:42:54.979412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.628 [2024-11-15 10:42:54.992039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.629 [2024-11-15 10:42:54.992254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.629 [2024-11-15 10:42:54.992281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.629 [2024-11-15 10:42:55.004964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.629 [2024-11-15 10:42:55.005195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.629 [2024-11-15 10:42:55.005217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.629 [2024-11-15 10:42:55.017861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.629 [2024-11-15 10:42:55.018086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.629 [2024-11-15 10:42:55.018121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.629 [2024-11-15 10:42:55.030761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.629 [2024-11-15 10:42:55.030978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.629 [2024-11-15 10:42:55.031010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.629 [2024-11-15 10:42:55.043646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.629 [2024-11-15 10:42:55.043868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.629 [2024-11-15 10:42:55.043889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.629 [2024-11-15 10:42:55.056478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.629 [2024-11-15 10:42:55.056734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.629 [2024-11-15 10:42:55.056784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.629 [2024-11-15 10:42:55.069432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.629 [2024-11-15 10:42:55.069681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.629 [2024-11-15 10:42:55.069705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.629 [2024-11-15 10:42:55.082326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.629 [2024-11-15 10:42:55.082559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.629 [2024-11-15 10:42:55.082581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.629 [2024-11-15 10:42:55.095252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.629 [2024-11-15 10:42:55.095470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.629 [2024-11-15 10:42:55.095492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.629 [2024-11-15 10:42:55.108146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.629 [2024-11-15 10:42:55.108358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.629 [2024-11-15 10:42:55.108379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.629 [2024-11-15 10:42:55.121099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.629 [2024-11-15 10:42:55.121316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.629 [2024-11-15 10:42:55.121337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.887 [2024-11-15 10:42:55.134079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.888 [2024-11-15 10:42:55.134293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.888 [2024-11-15 10:42:55.134313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.888 [2024-11-15 10:42:55.146999] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.888 [2024-11-15 10:42:55.147219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.888 [2024-11-15 10:42:55.147246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.888 [2024-11-15 10:42:55.159996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.888 [2024-11-15 10:42:55.160223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.888 [2024-11-15 10:42:55.160248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.888 [2024-11-15 10:42:55.172905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.888 [2024-11-15 10:42:55.173128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.888 [2024-11-15 10:42:55.173150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.888 [2024-11-15 10:42:55.185805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.888 [2024-11-15 10:42:55.186027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.888 [2024-11-15 10:42:55.186070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.888 [2024-11-15 10:42:55.198781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.888 [2024-11-15 10:42:55.198997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.888 [2024-11-15 10:42:55.199022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.888 [2024-11-15 10:42:55.211650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.888 [2024-11-15 10:42:55.211879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.888 [2024-11-15 10:42:55.211907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.888 [2024-11-15 10:42:55.224611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.888 [2024-11-15 10:42:55.224828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.888 [2024-11-15 10:42:55.224850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.888 [2024-11-15 10:42:55.237430] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.888 [2024-11-15 10:42:55.237666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.888 [2024-11-15 10:42:55.237688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.888 [2024-11-15 10:42:55.250365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.888 [2024-11-15 10:42:55.250602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.888 [2024-11-15 10:42:55.250627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.888 [2024-11-15 10:42:55.263258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.888 [2024-11-15 10:42:55.263474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.888 [2024-11-15 10:42:55.263497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.888 [2024-11-15 10:42:55.276106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.888 [2024-11-15 10:42:55.276317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.888 [2024-11-15 10:42:55.276339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.888 [2024-11-15 10:42:55.289042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.888 [2024-11-15 10:42:55.289269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.888 [2024-11-15 10:42:55.289296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.888 [2024-11-15 10:42:55.301986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.888 [2024-11-15 10:42:55.302212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.888 [2024-11-15 10:42:55.302238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.888 [2024-11-15 10:42:55.314917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.888 [2024-11-15 10:42:55.315131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.888 [2024-11-15 10:42:55.315156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.888 
[2024-11-15 10:42:55.327825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.888 [2024-11-15 10:42:55.328050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.888 [2024-11-15 10:42:55.328073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.888 [2024-11-15 10:42:55.340693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.888 [2024-11-15 10:42:55.340920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.888 [2024-11-15 10:42:55.340947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.888 [2024-11-15 10:42:55.353681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.888 [2024-11-15 10:42:55.353918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.888 [2024-11-15 10:42:55.353946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.888 [2024-11-15 10:42:55.366589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.888 [2024-11-15 10:42:55.366829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.888 [2024-11-15 10:42:55.366876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.888 [2024-11-15 10:42:55.379530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:29.888 [2024-11-15 10:42:55.379747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.888 [2024-11-15 10:42:55.379779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.146 [2024-11-15 10:42:55.392442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:30.146 [2024-11-15 10:42:55.392672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.146 [2024-11-15 10:42:55.392696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.146 [2024-11-15 10:42:55.405436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00 00:19:30.146 [2024-11-15 10:42:55.405692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.146 [2024-11-15 10:42:55.405719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0
00:19:30.146 [2024-11-15 10:42:55.418410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10085b0) with pdu=0x200016eddc00
00:19:30.146 [2024-11-15 10:42:55.418644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:30.146 [2024-11-15 10:42:55.418681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:30.147 [... 19 further near-identical entry groups omitted (10:42:55.431 through 10:42:55.664): each is one "Data digest error" on tqpair=(0x10085b0) followed by a WRITE (lba varies, len:1) completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:19:30.406 17639.00 IOPS, 68.90 MiB/s
00:19:30.406                                                                                 Latency(us)
00:19:30.406 [2024-11-15T10:42:55.904Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:30.406 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:19:30.406 nvme0n1                     : 2.01       17646.43      68.93       0.00     0.00    7239.44    2398.02   24665.37
00:19:30.406 [2024-11-15T10:42:55.904Z] ===================================================================================================================
00:19:30.406 [2024-11-15T10:42:55.904Z] Total                       :              17646.43      68.93       0.00     0.00    7239.44    2398.02   24665.37
00:19:30.406 {
00:19:30.406   "results": [
00:19:30.406     {
00:19:30.406       "job": "nvme0n1",
00:19:30.406       "core_mask": "0x2",
00:19:30.406       "workload": "randwrite",
00:19:30.406       "status": "finished",
00:19:30.406       "queue_depth": 128,
00:19:30.406       "io_size": 4096,
00:19:30.406       "runtime": 2.006411,
00:19:30.406       "iops": 17646.434354676087,
00:19:30.406       "mibps": 68.93138419795346,
00:19:30.406       "io_failed": 0,
00:19:30.406       "io_timeout": 0,
00:19:30.406       "avg_latency_us": 7239.441738174834,
00:19:30.406       "min_latency_us": 2398.021818181818,
00:19:30.406       "max_latency_us": 24665.36727272727
00:19:30.406     }
00:19:30.406   ],
00:19:30.406   "core_count": 1
00:19:30.406 }
00:19:30.406 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:19:30.406 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:19:30.406 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:19:30.406 10:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:19:30.406 | .driver_specific
00:19:30.406 | .nvme_error
00:19:30.406 | .status_code
00:19:30.406 | .command_transient_transport_error'
00:19:30.664 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 138 > 0 ))
00:19:30.664 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80629
00:19:30.664 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80629 ']'
00:19:30.664 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80629
00:19:30.664 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:19:30.664 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:19:30.664 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80629
00:19:30.664 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:19:30.664 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:19:30.664 killing process with pid 80629
00:19:30.664 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80629'
00:19:30.664 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80629
00:19:30.664 Received shutdown signal, test time was about 2.000000 seconds
00:19:30.664
00:19:30.664                                                                                 Latency(us)
00:19:30.664 [2024-11-15T10:42:56.162Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:30.664 [2024-11-15T10:42:56.162Z] ===================================================================================================================
00:19:30.664 [2024-11-15T10:42:56.162Z] Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:19:30.664 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80629
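The get_transient_errcount check above just reads the per-bdev NVMe error counters that --nvme-error-stat enables and picks out the TRANSIENT TRANSPORT ERROR (00/22) bucket. A minimal standalone sketch of the same query, with the socket path and bdev name taken from the trace above:

# Fetch iostat for the bdev from the running bdevperf instance and extract the
# count of transient transport errors (the completions printed in the log above).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

Here that query returned 138, which is why the (( 138 > 0 )) assertion above passed.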
00:19:30.923 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:19:30.923 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:19:30.923 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:19:30.923 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:19:30.923 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:19:30.923 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80682
00:19:30.923 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80682 /var/tmp/bperf.sock
00:19:30.923 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:19:30.923 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80682 ']'
00:19:30.923 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:19:30.923 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:19:30.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:19:30.923 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:19:30.923 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:19:30.923 10:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:30.923 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:30.923 Zero copy mechanism will not be used.
00:19:30.923 [2024-11-15 10:42:56.296252] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
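For reference, the launch step above can be reproduced outside the harness roughly as follows. waitforlisten is a harness helper; the plain RPC poll loop below is an approximation of it, not the harness code:

# Start bdevperf idle (-z) on a private RPC socket; the workload (randwrite,
# 128 KiB I/O, queue depth 16, 2 s) is armed now but only runs on perform_tests.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!
# Wait until the RPC socket answers before issuing any configuration calls.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1
done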
00:19:30.923 [2024-11-15 10:42:56.296355] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80682 ]
00:19:31.182 [2024-11-15 10:42:56.438790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:31.182 [2024-11-15 10:42:56.502638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:31.182 [2024-11-15 10:42:56.557114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:19:32.116 10:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:19:32.116 10:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:19:32.116 10:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:32.116 10:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:32.116 10:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:19:32.116 10:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:32.116 10:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:32.116 10:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:32.116 10:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:32.116 10:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:32.400 nvme0n1
00:19:32.400 10:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:19:32.400 10:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:32.400 10:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:32.400 10:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:32.400 10:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:19:32.400 10:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:19:32.664 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:32.664 Zero copy mechanism will not be used.
00:19:32.664 Running I/O for 2 seconds...
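Condensed, the configuration the trace above performs before starting I/O is the RPC sequence below. The bperf_rpc calls target /var/tmp/bperf.sock as shown in the trace; the accel_error_inject_error calls go through rpc_cmd, i.e. to the nvmf target application's RPC socket (written here without -s, using the default socket, which is an assumption about the target setup):

# Keep per-status-code NVMe error counters and retry failed I/O indefinitely,
# so injected digest errors show up as statistics rather than I/O failures.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
  bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Attach the controller with TCP data digest enabled (--ddgst) so payloads are CRC-checked.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
  bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# On the target, inject crc32c corruption so the host's data-digest checks fail
# (the -o/-t/-i arguments are taken verbatim from the trace above).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
# Kick off the armed workload in the idle bdevperf instance.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests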
00:19:32.664 [2024-11-15 10:42:57.977775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8
00:19:32.664 [2024-11-15 10:42:57.977911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:32.664 [2024-11-15 10:42:57.977943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:33.187 [... roughly 100 further near-identical entry groups omitted (10:42:57.983 through 10:42:58.518): each is one "Data digest error" on tqpair=(0x1008750) followed by a 128 KiB WRITE (len:32; lba and cid vary) completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:19:33.187 [2024-11-15 10:42:58.523957]
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.187 [2024-11-15 10:42:58.524069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.187 [2024-11-15 10:42:58.524091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.187 [2024-11-15 10:42:58.529230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.187 [2024-11-15 10:42:58.529329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.187 [2024-11-15 10:42:58.529352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.187 [2024-11-15 10:42:58.534586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.187 [2024-11-15 10:42:58.534698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.187 [2024-11-15 10:42:58.534721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.187 [2024-11-15 10:42:58.539735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.187 [2024-11-15 10:42:58.539811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.187 [2024-11-15 10:42:58.539834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.187 [2024-11-15 10:42:58.545005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.187 [2024-11-15 10:42:58.545093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.187 [2024-11-15 10:42:58.545116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.187 [2024-11-15 10:42:58.550279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.187 [2024-11-15 10:42:58.550583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.187 [2024-11-15 10:42:58.550607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.187 [2024-11-15 10:42:58.555634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.187 [2024-11-15 10:42:58.555720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.187 [2024-11-15 10:42:58.555746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.187 
[2024-11-15 10:42:58.560858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.187 [2024-11-15 10:42:58.560970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.187 [2024-11-15 10:42:58.560994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.187 [2024-11-15 10:42:58.566162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.187 [2024-11-15 10:42:58.566373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.187 [2024-11-15 10:42:58.566397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.187 [2024-11-15 10:42:58.571753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.187 [2024-11-15 10:42:58.571990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.187 [2024-11-15 10:42:58.572208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.187 [2024-11-15 10:42:58.577051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.187 [2024-11-15 10:42:58.577284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.187 [2024-11-15 10:42:58.577467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.187 [2024-11-15 10:42:58.582424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.187 [2024-11-15 10:42:58.582672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.187 [2024-11-15 10:42:58.582860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.187 [2024-11-15 10:42:58.587686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.187 [2024-11-15 10:42:58.587916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.187 [2024-11-15 10:42:58.588157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.187 [2024-11-15 10:42:58.593008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.187 [2024-11-15 10:42:58.593243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.187 [2024-11-15 10:42:58.593409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:19:33.187 [2024-11-15 10:42:58.598331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.187 [2024-11-15 10:42:58.598566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.187 [2024-11-15 10:42:58.598795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.187 [2024-11-15 10:42:58.603718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.187 [2024-11-15 10:42:58.603989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.187 [2024-11-15 10:42:58.604181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.187 [2024-11-15 10:42:58.609057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.187 [2024-11-15 10:42:58.609313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.188 [2024-11-15 10:42:58.609505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.188 [2024-11-15 10:42:58.614485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.188 [2024-11-15 10:42:58.614581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.188 [2024-11-15 10:42:58.614607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.188 [2024-11-15 10:42:58.619737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.188 [2024-11-15 10:42:58.619816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.188 [2024-11-15 10:42:58.619855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.188 [2024-11-15 10:42:58.624916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.188 [2024-11-15 10:42:58.624992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.188 [2024-11-15 10:42:58.625016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.188 [2024-11-15 10:42:58.630035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.188 [2024-11-15 10:42:58.630267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.188 [2024-11-15 10:42:58.630291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.188 [2024-11-15 10:42:58.635409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.188 [2024-11-15 10:42:58.635494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.188 [2024-11-15 10:42:58.635518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.188 [2024-11-15 10:42:58.640533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.188 [2024-11-15 10:42:58.640628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.188 [2024-11-15 10:42:58.640651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.188 [2024-11-15 10:42:58.645843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.188 [2024-11-15 10:42:58.645951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.188 [2024-11-15 10:42:58.645974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.188 [2024-11-15 10:42:58.650954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.188 [2024-11-15 10:42:58.651049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.188 [2024-11-15 10:42:58.651072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.188 [2024-11-15 10:42:58.656136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.188 [2024-11-15 10:42:58.656212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.188 [2024-11-15 10:42:58.656235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.188 [2024-11-15 10:42:58.661265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.188 [2024-11-15 10:42:58.661339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.188 [2024-11-15 10:42:58.661362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.188 [2024-11-15 10:42:58.666373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.188 [2024-11-15 10:42:58.666614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.188 [2024-11-15 10:42:58.666638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
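The errors above come from the NVMe/TCP data digest (DDGST) check: the receiver recomputes a CRC32C over each DATA PDU payload and compares it with the digest carried on the wire, and a mismatch surfaces as the data_crc32_calc_done error plus the (00/22) completion seen here. Below is a minimal standalone sketch of that check, assuming the usual NVMe/TCP digest convention (CRC32C seeded with all-ones, complemented at the end); it is illustrative plain C, not SPDK's actual code path, and the file/function names are made up for the example:

    /* crc32c_digest.c -- standalone sketch of an NVMe/TCP-style data digest
     * check. Illustrative only: SPDK computes this through its own crc32c
     * helpers; nothing here is SPDK code. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78), seeded
     * with all-ones and complemented at the end -- the convention NVMe/TCP
     * uses for its header and data digests. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* 32 blocks of 4096 B would match the len:32 WRITEs in this log if
         * the namespace uses 4 KiB blocks; the payload contents here are
         * arbitrary test data. */
        static uint8_t payload[32 * 4096];
        memset(payload, 0xA5, sizeof(payload));

        uint32_t ddgst = crc32c(payload, sizeof(payload));
        printf("computed DDGST: 0x%08x\n", (unsigned)ddgst);

        /* The receiver recomputes the digest over the DATA field and compares
         * it with the DDGST trailing the PDU; simulate one corrupted bit. */
        uint32_t wire_ddgst = ddgst ^ 1u;
        if (wire_ddgst != ddgst)
            fprintf(stderr, "Data digest error (expected 0x%08x, got 0x%08x)\n",
                    (unsigned)ddgst, (unsigned)wire_ddgst);
        return 0;
    }

Every WRITE in this run failing the same check on the same tqpair is consistent with a test that deliberately corrupts the digest, rather than with link-level corruption.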
[... the cycle continues unbroken from 10:42:58.671 through 10:42:58.968 (console stamps 00:19:33.188 to 00:19:33.710), one digest error and one (00/22) completion per WRITE, with cid varying over 0-5 and lba and sqhd still cycling ...]
00:19:33.710 [2024-11-15 10:42:58.973536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8
00:19:33.710 [2024-11-15 10:42:58.973622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:33.710 [2024-11-15 10:42:58.973645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:33.710 5801.00 IOPS, 725.12 MiB/s [2024-11-15T10:42:59.208Z]
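The completions decode as status code type 0x0 (generic) and status code 0x22, which spdk_nvme_print_completion renders as TRANSIENT TRANSPORT ERROR (00/22); the p, m, and dnr flags come out of the same 16-bit status word, and dnr:0 means the command may be retried. The interleaved bdevperf sample works out to 725.12 MiB/s / 5801.00 IOPS = 128 KiB per I/O, which matches the len:32 WRITEs if the namespace uses 4 KiB blocks (an assumption; the block size is not shown in this excerpt). A small sketch of the status-word decoding follows; the bitfield layout mirrors what SPDK declares for completion status, but the struct name and the raw value are constructed here for illustration:

    /* parse_status.c -- decode the status word printed by
     * spdk_nvme_print_completion (CQE dword 3, bits 31:16). */
    #include <stdint.h>
    #include <stdio.h>

    struct nvme_status {            /* illustrative; mirrors SPDK's layout */
        uint16_t p   : 1;           /* phase tag */
        uint16_t sc  : 8;           /* status code */
        uint16_t sct : 3;           /* status code type */
        uint16_t crd : 2;           /* command retry delay (NVMe 1.4+) */
        uint16_t m   : 1;           /* more */
        uint16_t dnr : 1;           /* do not retry */
    };

    int main(void)
    {
        /* Rebuild the status seen throughout this log: sct=0x0 (generic),
         * sc=0x22 (transient transport error), p/m/dnr all zero. */
        uint16_t raw = (uint16_t)((0x0u << 9) | (0x22u << 1));

        struct nvme_status st = {
            .p   =  raw        & 0x1,
            .sc  = (raw >> 1)  & 0xff,
            .sct = (raw >> 9)  & 0x7,
            .crd = (raw >> 12) & 0x3,
            .m   = (raw >> 14) & 0x1,
            .dnr = (raw >> 15) & 0x1,
        };

        /* Prints "(00/22) p:0 m:0 dnr:0", the same rendering as the log;
         * dnr=0 tells the host it is allowed to retry the command. */
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n",
               (unsigned)st.sct, (unsigned)st.sc,
               (unsigned)st.p, (unsigned)st.m, (unsigned)st.dnr);
        return 0;
    }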
00:19:33.710 [2024-11-15 10:42:58.979879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8
00:19:33.710 [2024-11-15 10:42:58.979980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:33.710 [2024-11-15 10:42:58.980004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the cycle resumes at the same ~5 ms cadence through 10:42:59.114, all on cid:0 now, each WRITE (len:32, varying lba) again completing with TRANSIENT TRANSPORT ERROR (00/22) ...]
00:19:33.711 [2024-11-15 10:42:59.119425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8
00:19:33.711 [2024-11-15 10:42:59.119682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:33.711 [2024-11-15 10:42:59.119705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042
p:0 m:0 dnr:0 00:19:33.711 [2024-11-15 10:42:59.124863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.711 [2024-11-15 10:42:59.124943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.711 [2024-11-15 10:42:59.124966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.711 [2024-11-15 10:42:59.130069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.711 [2024-11-15 10:42:59.130144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.711 [2024-11-15 10:42:59.130167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.711 [2024-11-15 10:42:59.135318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.711 [2024-11-15 10:42:59.135533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.711 [2024-11-15 10:42:59.135557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.711 [2024-11-15 10:42:59.140732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.711 [2024-11-15 10:42:59.140814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.711 [2024-11-15 10:42:59.140838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.711 [2024-11-15 10:42:59.146049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.711 [2024-11-15 10:42:59.146133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.711 [2024-11-15 10:42:59.146166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.711 [2024-11-15 10:42:59.151240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.711 [2024-11-15 10:42:59.151466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.711 [2024-11-15 10:42:59.151489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.711 [2024-11-15 10:42:59.156693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.711 [2024-11-15 10:42:59.156792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.711 [2024-11-15 10:42:59.156815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.711 [2024-11-15 10:42:59.161974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.711 [2024-11-15 10:42:59.162080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.711 [2024-11-15 10:42:59.162103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.711 [2024-11-15 10:42:59.167211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.711 [2024-11-15 10:42:59.167411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.711 [2024-11-15 10:42:59.167435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.711 [2024-11-15 10:42:59.172586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.711 [2024-11-15 10:42:59.172680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.711 [2024-11-15 10:42:59.172703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.711 [2024-11-15 10:42:59.178013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.711 [2024-11-15 10:42:59.178099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.711 [2024-11-15 10:42:59.178122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.711 [2024-11-15 10:42:59.183419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.711 [2024-11-15 10:42:59.183645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.711 [2024-11-15 10:42:59.183668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.711 [2024-11-15 10:42:59.188855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.711 [2024-11-15 10:42:59.188941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.711 [2024-11-15 10:42:59.188965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.711 [2024-11-15 10:42:59.194267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.711 [2024-11-15 10:42:59.194371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.711 [2024-11-15 10:42:59.194393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.711 [2024-11-15 10:42:59.199504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.711 [2024-11-15 10:42:59.199593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.711 [2024-11-15 10:42:59.199616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.971 [2024-11-15 10:42:59.204696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.971 [2024-11-15 10:42:59.204800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.204823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.209993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.210089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.210112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.215202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.215431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.215454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.220655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.220737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.220760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.225862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.225964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.225986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.231087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.231318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.231340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.236477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.236564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.236588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.241674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.241751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.241774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.246913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.247112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.247134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.252248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.252337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.252360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.257505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.257613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.257637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.262737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.262814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.262836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.268040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.268114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 
10:42:59.268137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.273344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.273429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.273453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.278596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.278672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.278695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.283797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.283872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.283895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.289066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.289163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.289185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.294268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.294363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.294386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.299474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.299588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.299611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.304683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.304771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:33.972 [2024-11-15 10:42:59.304794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.309999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.310080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.310103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.315250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.315347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.315370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.320433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.320545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.320569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.325599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.325695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.325718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.330860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.330944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.330968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.336090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.336176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.336200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.341330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.972 [2024-11-15 10:42:59.341431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.972 [2024-11-15 10:42:59.341454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.972 [2024-11-15 10:42:59.346605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.346711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.346734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.973 [2024-11-15 10:42:59.351806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.351882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.351905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.973 [2024-11-15 10:42:59.357046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.357121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.357143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.973 [2024-11-15 10:42:59.362242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.362321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.362344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.973 [2024-11-15 10:42:59.367451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.367539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.367562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.973 [2024-11-15 10:42:59.372673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.372780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.372803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.973 [2024-11-15 10:42:59.377928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.378004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.378029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.973 [2024-11-15 10:42:59.383153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.383229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.383252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.973 [2024-11-15 10:42:59.388355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.388439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.388462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.973 [2024-11-15 10:42:59.393584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.393671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.393695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.973 [2024-11-15 10:42:59.398850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.398933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.398957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.973 [2024-11-15 10:42:59.404078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.404154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.404177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.973 [2024-11-15 10:42:59.409353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.409447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.409469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.973 [2024-11-15 10:42:59.414576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.414661] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.414684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.973 [2024-11-15 10:42:59.419760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.419836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.419858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.973 [2024-11-15 10:42:59.425008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.425093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.425115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.973 [2024-11-15 10:42:59.430266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.430347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.430369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.973 [2024-11-15 10:42:59.435493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.435608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.435637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.973 [2024-11-15 10:42:59.440694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.440770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.440792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.973 [2024-11-15 10:42:59.445930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.446005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.446027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.973 [2024-11-15 10:42:59.451238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.451312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.451335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.973 [2024-11-15 10:42:59.456464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.456554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.456577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.973 [2024-11-15 10:42:59.461649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:33.973 [2024-11-15 10:42:59.461749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.973 [2024-11-15 10:42:59.461775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.233 [2024-11-15 10:42:59.466868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.233 [2024-11-15 10:42:59.466952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.233 [2024-11-15 10:42:59.466975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.233 [2024-11-15 10:42:59.472080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.233 [2024-11-15 10:42:59.472168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.233 [2024-11-15 10:42:59.472191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.233 [2024-11-15 10:42:59.477249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.233 [2024-11-15 10:42:59.477334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.233 [2024-11-15 10:42:59.477356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.233 [2024-11-15 10:42:59.482585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.233 [2024-11-15 10:42:59.482665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.233 [2024-11-15 10:42:59.482688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.233 [2024-11-15 10:42:59.487782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.233 [2024-11-15 
10:42:59.487856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.233 [2024-11-15 10:42:59.487879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.233 [2024-11-15 10:42:59.492985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.233 [2024-11-15 10:42:59.493070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.233 [2024-11-15 10:42:59.493098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.233 [2024-11-15 10:42:59.498269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.233 [2024-11-15 10:42:59.498343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.233 [2024-11-15 10:42:59.498366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.233 [2024-11-15 10:42:59.503496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.233 [2024-11-15 10:42:59.503608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.233 [2024-11-15 10:42:59.503631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.233 [2024-11-15 10:42:59.508733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.233 [2024-11-15 10:42:59.508820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.233 [2024-11-15 10:42:59.508843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.233 [2024-11-15 10:42:59.513891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.233 [2024-11-15 10:42:59.513983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.233 [2024-11-15 10:42:59.514005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.233 [2024-11-15 10:42:59.519185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.233 [2024-11-15 10:42:59.519259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.233 [2024-11-15 10:42:59.519281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.233 [2024-11-15 10:42:59.524379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 
00:19:34.233 [2024-11-15 10:42:59.524482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.233 [2024-11-15 10:42:59.524504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.233 [2024-11-15 10:42:59.529698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.233 [2024-11-15 10:42:59.529780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.234 [2024-11-15 10:42:59.529803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.234 [2024-11-15 10:42:59.534892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.234 [2024-11-15 10:42:59.534970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.234 [2024-11-15 10:42:59.534993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.234 [2024-11-15 10:42:59.540142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.234 [2024-11-15 10:42:59.540218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.234 [2024-11-15 10:42:59.540241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.234 [2024-11-15 10:42:59.545397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.234 [2024-11-15 10:42:59.545483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.234 [2024-11-15 10:42:59.545505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.234 [2024-11-15 10:42:59.550638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.234 [2024-11-15 10:42:59.550723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.234 [2024-11-15 10:42:59.550746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.234 [2024-11-15 10:42:59.555844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.234 [2024-11-15 10:42:59.555932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.234 [2024-11-15 10:42:59.555954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.234 [2024-11-15 10:42:59.561077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.234 [2024-11-15 10:42:59.561157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.234 [2024-11-15 10:42:59.561179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.234 [2024-11-15 10:42:59.566353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.234 [2024-11-15 10:42:59.566447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.234 [2024-11-15 10:42:59.566470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.234 [2024-11-15 10:42:59.571587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.234 [2024-11-15 10:42:59.571662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.234 [2024-11-15 10:42:59.571685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.234 [2024-11-15 10:42:59.576817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.234 [2024-11-15 10:42:59.576899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.234 [2024-11-15 10:42:59.576921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.234 [2024-11-15 10:42:59.582090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.234 [2024-11-15 10:42:59.582190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.234 [2024-11-15 10:42:59.582213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.234 [2024-11-15 10:42:59.587280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.234 [2024-11-15 10:42:59.587356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.234 [2024-11-15 10:42:59.587378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.234 [2024-11-15 10:42:59.592571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.234 [2024-11-15 10:42:59.592656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.234 [2024-11-15 10:42:59.592679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.234 [2024-11-15 10:42:59.597802] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.234 [2024-11-15 10:42:59.597888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.234 [2024-11-15 10:42:59.597911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.234 [2024-11-15 10:42:59.603072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.234 [2024-11-15 10:42:59.603169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.234 [2024-11-15 10:42:59.603192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.234 [2024-11-15 10:42:59.608256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.234 [2024-11-15 10:42:59.608347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.234 [2024-11-15 10:42:59.608370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.234 [2024-11-15 10:42:59.613567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.234 [2024-11-15 10:42:59.613638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.234 [2024-11-15 10:42:59.613672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.234 [2024-11-15 10:42:59.618734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.234 [2024-11-15 10:42:59.618844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.234 [2024-11-15 10:42:59.618867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.234 [2024-11-15 10:42:59.623950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.234 [2024-11-15 10:42:59.624066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.234 [2024-11-15 10:42:59.624089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.234 [2024-11-15 10:42:59.629281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.234 [2024-11-15 10:42:59.629383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.234 [2024-11-15 10:42:59.629405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.234 [2024-11-15 10:42:59.634584] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.234 [2024-11-15 10:42:59.634659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.234 [2024-11-15 10:42:59.634681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.234 [2024-11-15 10:42:59.639829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.234 [2024-11-15 10:42:59.639918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.234 [2024-11-15 10:42:59.639941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.234 [2024-11-15 10:42:59.645010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.234 [2024-11-15 10:42:59.645084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.235 [2024-11-15 10:42:59.645106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.235 [2024-11-15 10:42:59.650274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.235 [2024-11-15 10:42:59.650361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.235 [2024-11-15 10:42:59.650385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.235 [2024-11-15 10:42:59.655539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.235 [2024-11-15 10:42:59.655624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.235 [2024-11-15 10:42:59.655647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.235 [2024-11-15 10:42:59.660769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.235 [2024-11-15 10:42:59.660846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.235 [2024-11-15 10:42:59.660870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.235 [2024-11-15 10:42:59.665963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.235 [2024-11-15 10:42:59.666049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.235 [2024-11-15 10:42:59.666071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.235 
[2024-11-15 10:42:59.671206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.235 [2024-11-15 10:42:59.671281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.235 [2024-11-15 10:42:59.671304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.235 [2024-11-15 10:42:59.676412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.235 [2024-11-15 10:42:59.676486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.235 [2024-11-15 10:42:59.676523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.235 [2024-11-15 10:42:59.681627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.235 [2024-11-15 10:42:59.681721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.235 [2024-11-15 10:42:59.681744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.235 [2024-11-15 10:42:59.686889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.235 [2024-11-15 10:42:59.686964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.235 [2024-11-15 10:42:59.686987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.235 [2024-11-15 10:42:59.692083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.235 [2024-11-15 10:42:59.692168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.235 [2024-11-15 10:42:59.692191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.235 [2024-11-15 10:42:59.697266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.235 [2024-11-15 10:42:59.697370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.235 [2024-11-15 10:42:59.697393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.235 [2024-11-15 10:42:59.702471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.235 [2024-11-15 10:42:59.702561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.235 [2024-11-15 10:42:59.702584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:19:34.235 [2024-11-15 10:42:59.707717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.235 [2024-11-15 10:42:59.707796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.235 [2024-11-15 10:42:59.707820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.235 [2024-11-15 10:42:59.712918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.235 [2024-11-15 10:42:59.713020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.235 [2024-11-15 10:42:59.713043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.235 [2024-11-15 10:42:59.718166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.235 [2024-11-15 10:42:59.718242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.235 [2024-11-15 10:42:59.718265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.235 [2024-11-15 10:42:59.723434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.235 [2024-11-15 10:42:59.723550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.235 [2024-11-15 10:42:59.723574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.496 [2024-11-15 10:42:59.728668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.496 [2024-11-15 10:42:59.728742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.496 [2024-11-15 10:42:59.728765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.496 [2024-11-15 10:42:59.733890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.496 [2024-11-15 10:42:59.733979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.496 [2024-11-15 10:42:59.734001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.496 [2024-11-15 10:42:59.739081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.496 [2024-11-15 10:42:59.739162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.496 [2024-11-15 10:42:59.739186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.496 [2024-11-15 10:42:59.744362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.496 [2024-11-15 10:42:59.744436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.496 [2024-11-15 10:42:59.744458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.496 [2024-11-15 10:42:59.749637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.496 [2024-11-15 10:42:59.749718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.496 [2024-11-15 10:42:59.749742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.496 [2024-11-15 10:42:59.754949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.496 [2024-11-15 10:42:59.755034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.496 [2024-11-15 10:42:59.755057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.496 [2024-11-15 10:42:59.760173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.496 [2024-11-15 10:42:59.760248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.496 [2024-11-15 10:42:59.760271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.496 [2024-11-15 10:42:59.765414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.496 [2024-11-15 10:42:59.765537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.496 [2024-11-15 10:42:59.765561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.496 [2024-11-15 10:42:59.770762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.496 [2024-11-15 10:42:59.770839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.496 [2024-11-15 10:42:59.770863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.496 [2024-11-15 10:42:59.776061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.496 [2024-11-15 10:42:59.776136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.496 [2024-11-15 10:42:59.776159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.496 [2024-11-15 10:42:59.781378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.496 [2024-11-15 10:42:59.781460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.496 [2024-11-15 10:42:59.781483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.496 [2024-11-15 10:42:59.786689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.496 [2024-11-15 10:42:59.786767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.496 [2024-11-15 10:42:59.786791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.496 [2024-11-15 10:42:59.791926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.496 [2024-11-15 10:42:59.792002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.496 [2024-11-15 10:42:59.792025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.496 [2024-11-15 10:42:59.797117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.496 [2024-11-15 10:42:59.797213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.496 [2024-11-15 10:42:59.797237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.496 [2024-11-15 10:42:59.802441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.496 [2024-11-15 10:42:59.802532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.496 [2024-11-15 10:42:59.802556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.496 [2024-11-15 10:42:59.807684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.496 [2024-11-15 10:42:59.807772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.496 [2024-11-15 10:42:59.807795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.496 [2024-11-15 10:42:59.812930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.496 [2024-11-15 10:42:59.813007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.496 [2024-11-15 10:42:59.813031] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.496 [2024-11-15 10:42:59.818172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.496 [2024-11-15 10:42:59.818268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.496 [2024-11-15 10:42:59.818291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.496 [2024-11-15 10:42:59.823400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.496 [2024-11-15 10:42:59.823475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.496 [2024-11-15 10:42:59.823498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.496 [2024-11-15 10:42:59.828644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.496 [2024-11-15 10:42:59.828719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.496 [2024-11-15 10:42:59.828742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.496 [2024-11-15 10:42:59.833870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.496 [2024-11-15 10:42:59.833956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.496 [2024-11-15 10:42:59.833978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.496 [2024-11-15 10:42:59.839098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.496 [2024-11-15 10:42:59.839185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.497 [2024-11-15 10:42:59.839207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.844320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.844395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.497 [2024-11-15 10:42:59.844417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.849525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.849610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.497 [2024-11-15 10:42:59.849633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.854817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.854902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.497 [2024-11-15 10:42:59.854925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.859994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.860069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.497 [2024-11-15 10:42:59.860092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.865234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.865310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.497 [2024-11-15 10:42:59.865334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.870439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.870529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.497 [2024-11-15 10:42:59.870552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.875689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.875775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.497 [2024-11-15 10:42:59.875798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.880925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.881010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.497 [2024-11-15 10:42:59.881033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.886114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.886204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.497 [2024-11-15 
10:42:59.886227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.891312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.891388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.497 [2024-11-15 10:42:59.891412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.896582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.896662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.497 [2024-11-15 10:42:59.896686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.901791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.901878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.497 [2024-11-15 10:42:59.901901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.907074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.907149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.497 [2024-11-15 10:42:59.907172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.912379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.912453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.497 [2024-11-15 10:42:59.912477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.917588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.917689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.497 [2024-11-15 10:42:59.917712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.922854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.922952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:34.497 [2024-11-15 10:42:59.922974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.928075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.928151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.497 [2024-11-15 10:42:59.928175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.933337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.933423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.497 [2024-11-15 10:42:59.933445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.938608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.938693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.497 [2024-11-15 10:42:59.938728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.943906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.943984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.497 [2024-11-15 10:42:59.944007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.949153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.949253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.497 [2024-11-15 10:42:59.949276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.954427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.954525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.497 [2024-11-15 10:42:59.954549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.497 [2024-11-15 10:42:59.959684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8 00:19:34.497 [2024-11-15 10:42:59.959781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:34.497 [2024-11-15 10:42:59.959805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:34.497 [2024-11-15 10:42:59.964916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8
00:19:34.497 [2024-11-15 10:42:59.965005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:34.497 [2024-11-15 10:42:59.965028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:34.497 [2024-11-15 10:42:59.970191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8
00:19:34.497 [2024-11-15 10:42:59.970268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:34.497 [2024-11-15 10:42:59.970291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:34.497 5841.50 IOPS, 730.19 MiB/s [2024-11-15T10:42:59.995Z] [2024-11-15 10:42:59.976361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1008750) with pdu=0x200016eff3c8
00:19:34.497 [2024-11-15 10:42:59.976474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:34.497 [2024-11-15 10:42:59.976496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:34.497
00:19:34.497 Latency(us)
00:19:34.497 [2024-11-15T10:42:59.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:34.497 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:19:34.498 nvme0n1 : 2.00 5838.47 729.81 0.00 0.00 2734.27 1824.58 6732.33
00:19:34.498 [2024-11-15T10:42:59.996Z] ===================================================================================================================
00:19:34.498 [2024-11-15T10:42:59.996Z] Total : 5838.47 729.81 0.00 0.00 2734.27 1824.58 6732.33
00:19:34.498 {
00:19:34.498 "results": [
00:19:34.498 {
00:19:34.498 "job": "nvme0n1",
00:19:34.498 "core_mask": "0x2",
00:19:34.498 "workload": "randwrite",
00:19:34.498 "status": "finished",
00:19:34.498 "queue_depth": 16,
00:19:34.498 "io_size": 131072,
00:19:34.498 "runtime": 2.00378,
00:19:34.498 "iops": 5838.4653005819,
00:19:34.498 "mibps": 729.8081625727375,
00:19:34.498 "io_failed": 0,
00:19:34.498 "io_timeout": 0,
00:19:34.498 "avg_latency_us": 2734.269141884699,
00:19:34.498 "min_latency_us": 1824.581818181818,
00:19:34.498 "max_latency_us": 6732.334545454545
00:19:34.498 }
00:19:34.498 ],
00:19:34.498 "core_count": 1
00:19:34.498 }
00:19:34.756 10:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:19:34.756 10:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:19:34.756 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:19:34.756 | .driver_specific
00:19:34.756 | .nvme_error
00:19:34.756 | .status_code
00:19:34.756 | .command_transient_transport_error'
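The @27/@28 lines above are the two halves of get_transient_errcount: a bdev_get_iostat RPC against the bperf socket, piped through a jq filter that extracts the transient-transport-error counter that every one of the digest-error completions above incremented. A minimal standalone sketch of the same check (the rpc.py path, socket, bdev name, and the greater-than-zero assertion are taken from this log; collapsing the multi-line jq filter into one path expression is the only change):

# Sketch of the check the surrounding lines perform: count how many WRITEs
# completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) after the test
# deliberately corrupted their data digests.
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 ))   # digest.sh@71 asserts a non-zero count; here it sees 378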
00:19:34.756 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:19:35.015 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 378 > 0 ))
00:19:35.015 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80682
00:19:35.015 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80682 ']'
00:19:35.015 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80682
00:19:35.015 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:19:35.015 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:19:35.015 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80682
00:19:35.015 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:19:35.015 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:19:35.015 killing process with pid 80682
00:19:35.015 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80682'
00:19:35.015 Received shutdown signal, test time was about 2.000000 seconds
00:19:35.015
00:19:35.015 Latency(us)
00:19:35.015 [2024-11-15T10:43:00.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:35.015 [2024-11-15T10:43:00.513Z] ===================================================================================================================
00:19:35.015 [2024-11-15T10:43:00.513Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:35.015 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80682
00:19:35.015 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80682
00:19:35.274 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80487
00:19:35.274 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80487 ']'
00:19:35.274 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80487
00:19:35.274 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:19:35.274 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:19:35.274 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80487
00:19:35.274 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:19:35.274 killing process with pid 80487
00:19:35.274 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:19:35.274 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80487'
00:19:35.274 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80487
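The killprocess calls traced above follow one fixed shape: guard against an empty pid, probe it with kill -0, resolve the command name with ps (reactor_1 for the bperf app, reactor_0 for the nvmf target), then announce, kill, and reap. A rough bash reconstruction of that shape, not the verbatim helper from test/common/autotest_common.sh (which, per the @962 checks, also special-cases sudo-wrapped processes):

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                 # @952: refuse an empty pid
    kill -0 "$pid" || return 1                # @956: is the process still alive?
    if [[ $(uname) == Linux ]]; then          # @957
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # @958: e.g. reactor_0
    fi
    echo "killing process with pid $pid"      # @970
    kill "$pid"                               # @971
    wait "$pid"                               # @976: reap the child, propagate rc
}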
10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80487 00:19:35.532 00:19:35.532 real 0m16.919s 00:19:35.532 user 0m32.685s 00:19:35.532 sys 0m4.631s 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:35.532 ************************************ 00:19:35.532 END TEST nvmf_digest_error 00:19:35.532 ************************************ 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:35.532 rmmod nvme_tcp 00:19:35.532 rmmod nvme_fabrics 00:19:35.532 rmmod nvme_keyring 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80487 ']' 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80487 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 80487 ']' 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 80487 00:19:35.532 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (80487) - No such process 00:19:35.532 Process with pid 80487 is not found 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 80487 is not found' 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:35.532 10:43:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 
nomaster 00:19:35.532 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:19:35.791 00:19:35.791 real 0m34.614s 00:19:35.791 user 1m5.767s 00:19:35.791 sys 0m9.727s 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:35.791 ************************************ 00:19:35.791 END TEST nvmf_digest 00:19:35.791 ************************************ 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.791 ************************************ 00:19:35.791 START TEST nvmf_host_multipath 00:19:35.791 ************************************ 00:19:35.791 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:36.050 * Looking for test storage... 
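With nvmf_digest fully torn down above and nvmf_host_multipath about to set up, the nvmftestfini sequence is easy to miss in the trace: kill the target app, unload the kernel initiator modules (the rmmod lines for nvme_tcp, nvme_fabrics, and nvme_keyring), strip only the firewall rules the suite tagged, then dismantle the veth/bridge/netns topology. A condensed sketch of that order (killprocess and pid 80487 come from the trace; iptr is nvmf/common.sh's one-pipeline firewall cleanup):

killprocess 80487 || true                          # target app; may already be gone
modprobe -v -r nvme-tcp                            # also drags out nvme_fabrics/nvme_keyring deps
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only the tagged rules
# then: set the nvmf_* links nomaster/down, delete them and nvmf_br,
# and remove the nvmf_tgt_ns_spdk namespace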
00:19:36.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:36.050 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:36.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.051 --rc genhtml_branch_coverage=1 00:19:36.051 --rc genhtml_function_coverage=1 00:19:36.051 --rc genhtml_legend=1 00:19:36.051 --rc geninfo_all_blocks=1 00:19:36.051 --rc geninfo_unexecuted_blocks=1 00:19:36.051 00:19:36.051 ' 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:36.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.051 --rc genhtml_branch_coverage=1 00:19:36.051 --rc genhtml_function_coverage=1 00:19:36.051 --rc genhtml_legend=1 00:19:36.051 --rc geninfo_all_blocks=1 00:19:36.051 --rc geninfo_unexecuted_blocks=1 00:19:36.051 00:19:36.051 ' 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:36.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.051 --rc genhtml_branch_coverage=1 00:19:36.051 --rc genhtml_function_coverage=1 00:19:36.051 --rc genhtml_legend=1 00:19:36.051 --rc geninfo_all_blocks=1 00:19:36.051 --rc geninfo_unexecuted_blocks=1 00:19:36.051 00:19:36.051 ' 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:36.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.051 --rc genhtml_branch_coverage=1 00:19:36.051 --rc genhtml_function_coverage=1 00:19:36.051 --rc genhtml_legend=1 00:19:36.051 --rc geninfo_all_blocks=1 00:19:36.051 --rc geninfo_unexecuted_blocks=1 00:19:36.051 00:19:36.051 ' 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:36.051 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:36.051 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:36.052 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:36.052 Cannot find device "nvmf_init_br" 00:19:36.052 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:19:36.052 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:36.052 Cannot find device "nvmf_init_br2" 00:19:36.052 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:19:36.052 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:36.052 Cannot find device "nvmf_tgt_br" 00:19:36.052 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:19:36.052 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:36.310 Cannot find device "nvmf_tgt_br2" 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:36.310 Cannot find device "nvmf_init_br" 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:36.310 Cannot find device "nvmf_init_br2" 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:36.310 Cannot find device "nvmf_tgt_br" 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:36.310 Cannot find device "nvmf_tgt_br2" 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:36.310 Cannot find device "nvmf_br" 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:36.310 Cannot find device "nvmf_init_if" 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:36.310 Cannot find device "nvmf_init_if2" 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:19:36.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:36.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:36.310 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:36.311 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:36.311 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:36.311 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:36.311 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:36.311 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:36.311 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
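What nvmf_veth_init is assembling here: two initiator-side interfaces stay in the root namespace (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2), two target-side interfaces move into nvmf_tgt_ns_spdk (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4), and the four bridge-side veth peers are being attached to nvmf_br so everything shares one L2 segment. The same bring-up, condensed (names and addresses exactly as in the trace; the "Cannot find device" errors above are just the pre-cleanup finding nothing to remove):

  ip netns add nvmf_tgt_ns_spdk
  # four veth pairs: endpoint <-> bridge-side peer
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  # target endpoints live inside the namespace
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bring the bridge up and attach the four peers (the endpoint links,
  # and lo inside the namespace, are set up the same way in the trace)
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$peer" up
      ip link set "$peer" master nvmf_br
  done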
00:19:36.311 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:19:36.569 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:19:36.569 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms
00:19:36.569
00:19:36.569 --- 10.0.0.3 ping statistics ---
00:19:36.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:36.569 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:19:36.569 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:19:36.569 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms
00:19:36.569
00:19:36.569 --- 10.0.0.4 ping statistics ---
00:19:36.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:36.569 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:19:36.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:36.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms
00:19:36.569
00:19:36.569 --- 10.0.0.1 ping statistics ---
00:19:36.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:36.569 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:19:36.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:36.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms
00:19:36.569
00:19:36.569 --- 10.0.0.2 ping statistics ---
00:19:36.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:36.569 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:19:36.569 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:36.570 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:19:36.570 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=81001
00:19:36.570 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:19:36.570 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 81001
00:19:36.570 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 81001 ']'
00:19:36.570 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:36.570 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100
00:19:36.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:36.570 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:36.570 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable
00:19:36.570 10:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:19:36.570 [2024-11-15 10:43:01.956719] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
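Two things worth noting in the startup above. First, NVMF_APP was prefixed with NVMF_TARGET_NS_CMD, so nvmfappstart launches the target inside nvmf_tgt_ns_spdk while the RPC socket /var/tmp/spdk.sock, being a filesystem object, stays reachable from the root namespace; that is why plain rpc.py calls can drive a namespaced target. Second, once the target is up, multipath.sh builds the whole NVMe-oF stack over that socket. The sequence, condensed from the trace that follows (flags as traced; the readiness poll is only a stand-in for the real waitforlisten helper):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  until "$rpc_py" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done    # wait for /var/tmp/spdk.sock
  "$rpc_py" nvmf_create_transport -t tcp -o -u 8192                      # TCP transport
  "$rpc_py" bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB RAM bdev, 512 B blocks
  "$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r enables ANA reporting
  "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # path one
  "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421   # path two

bdevperf then attaches one controller per listener with -x multipath (bdev_nvme_attach_controller ... -s 4420 and ... -s 4421, both as -b Nvme0), so the two paths surface as a single Nvme0n1 bdev.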
00:19:36.570 [2024-11-15 10:43:01.956832] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.828 [2024-11-15 10:43:02.113193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:36.828 [2024-11-15 10:43:02.176305] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.828 [2024-11-15 10:43:02.176372] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.828 [2024-11-15 10:43:02.176386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.828 [2024-11-15 10:43:02.176397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.828 [2024-11-15 10:43:02.176407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:36.828 [2024-11-15 10:43:02.177626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.828 [2024-11-15 10:43:02.177640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.828 [2024-11-15 10:43:02.235048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:36.828 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:36.828 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:19:36.828 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:36.828 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:36.828 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:37.086 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.086 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=81001 00:19:37.086 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:37.344 [2024-11-15 10:43:02.654377] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.344 10:43:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:37.601 Malloc0 00:19:37.601 10:43:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:37.859 10:43:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:38.117 10:43:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:38.375 [2024-11-15 10:43:03.852485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:38.635 10:43:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:38.635 [2024-11-15 10:43:04.108632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:38.635 10:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81049 00:19:38.635 10:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:38.635 10:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81049 /var/tmp/bdevperf.sock 00:19:38.635 10:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 81049 ']' 00:19:38.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:38.635 10:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.635 10:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:38.635 10:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.635 10:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:38.635 10:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:38.635 10:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:40.010 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:40.010 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:19:40.010 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:40.010 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:40.639 Nvme0n1 00:19:40.639 10:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:40.898 Nvme0n1 00:19:40.898 10:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:19:40.898 10:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:41.833 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:19:41.833 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:42.091 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:42.657 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:19:42.657 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81001 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:42.657 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81100 00:19:42.657 10:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:49.246 10:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:49.246 10:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:49.246 10:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:49.246 10:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:49.246 Attaching 4 probes... 00:19:49.246 @path[10.0.0.3, 4421]: 17454 00:19:49.246 @path[10.0.0.3, 4421]: 17901 00:19:49.246 @path[10.0.0.3, 4421]: 17632 00:19:49.246 @path[10.0.0.3, 4421]: 17862 00:19:49.246 @path[10.0.0.3, 4421]: 17520 00:19:49.246 10:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:49.246 10:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:49.246 10:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:49.246 10:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:49.246 10:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:49.246 10:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:49.246 10:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81100 00:19:49.246 10:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:49.246 10:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:19:49.246 10:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:49.246 10:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:49.504 10:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:19:49.504 10:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81219 00:19:49.504 10:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81001 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:49.504 10:43:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:56.080 10:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:56.080 10:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:56.080 10:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:56.080 10:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:56.080 Attaching 4 probes... 00:19:56.080 @path[10.0.0.3, 4420]: 17825 00:19:56.080 @path[10.0.0.3, 4420]: 17938 00:19:56.080 @path[10.0.0.3, 4420]: 17947 00:19:56.080 @path[10.0.0.3, 4420]: 17898 00:19:56.080 @path[10.0.0.3, 4420]: 17992 00:19:56.080 10:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:56.080 10:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:56.080 10:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:56.080 10:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:56.080 10:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:56.080 10:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:56.080 10:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81219 00:19:56.080 10:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:56.080 10:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:19:56.080 10:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:56.080 10:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:56.339 10:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:19:56.339 10:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81330 00:19:56.339 10:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81001 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:56.339 10:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:02.908 10:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:02.908 10:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:02.908 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:02.908 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:02.908 Attaching 4 probes... 00:20:02.908 @path[10.0.0.3, 4421]: 15174 00:20:02.908 @path[10.0.0.3, 4421]: 17647 00:20:02.908 @path[10.0.0.3, 4421]: 17767 00:20:02.908 @path[10.0.0.3, 4421]: 17765 00:20:02.908 @path[10.0.0.3, 4421]: 17723 00:20:02.908 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:02.908 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:02.908 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:02.908 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:02.908 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:02.908 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:02.908 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81330 00:20:02.908 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:02.908 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:20:02.908 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:02.908 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:03.166 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:20:03.166 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81444 00:20:03.166 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:03.166 10:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81001 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:09.730 10:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:09.730 10:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:20:09.730 10:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:20:09.730 10:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:09.730 Attaching 4 probes... 
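Here both listeners have just been set inaccessible, so this probe window records nothing: the dump below is only bare timestamps, the jq filter matches ana_state=="" and active_port comes back empty, and the empty-equals-empty comparison is what passes. That is confirm_io_on_port's whole contract, tying the target-internal view to the listener table. A condensed sketch of the helper as this trace exercises it (trace path shortened, target PID 81001 as above):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  confirm_io_on_port() {
      local state=$1 expected=$2
      # count I/O per path inside the target while bdevperf keeps running
      /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81001 \
          /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > trace.txt &
      local dtrace_pid=$!
      sleep 6
      # which port does the listener table say should carry I/O in this ANA state?
      local active_port
      active_port=$("$rpc_py" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
          jq -r ".[] | select(.ana_states[0].ana_state==\"$state\") | .address.trsvcid")
      # which port did the probes actually see I/O on? (first @path sample wins)
      local port
      port=$(awk '$1=="@path[10.0.0.3," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
      kill "$dtrace_pid"; rm -f trace.txt
      [[ $port == "$expected" ]] && [[ $active_port == "$expected" ]]
  }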
00:20:09.730 00:20:09.730 00:20:09.730 00:20:09.730 00:20:09.730 00:20:09.730 10:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:09.730 10:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:09.730 10:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:09.730 10:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:20:09.730 10:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:20:09.730 10:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:20:09.730 10:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81444 00:20:09.730 10:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:09.730 10:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:20:09.730 10:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:09.730 10:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:10.298 10:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:20:10.298 10:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81562 00:20:10.298 10:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81001 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:10.298 10:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:16.871 10:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:16.871 10:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:16.871 10:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:16.871 10:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:16.871 Attaching 4 probes... 
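With that, the test has visited every interesting corner of the ANA matrix once and, in the window below, returns I/O to port 4421. set_ANA_state itself is just one RPC per listener; the transitions exercised so far, condensed (same NQN and address as above):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  set_ANA_state() {   # $1 drives the 4420 listener, $2 the 4421 listener
      "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$1"
      "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$2"
  }
  set_ANA_state non_optimized optimized      # I/O lands on 4421
  set_ANA_state non_optimized inaccessible   # fails over to 4420
  set_ANA_state inaccessible optimized       # back to 4421
  set_ANA_state inaccessible inaccessible    # no usable path; probes stay empty
  set_ANA_state non_optimized optimized      # 4421 again, the window below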
00:20:16.871 @path[10.0.0.3, 4421]: 17025 00:20:16.871 @path[10.0.0.3, 4421]: 17360 00:20:16.871 @path[10.0.0.3, 4421]: 17522 00:20:16.871 @path[10.0.0.3, 4421]: 17443 00:20:16.871 @path[10.0.0.3, 4421]: 17533 00:20:16.871 10:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:16.871 10:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:16.871 10:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:16.871 10:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:16.871 10:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:16.871 10:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:16.871 10:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81562 00:20:16.871 10:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:16.871 10:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:16.871 10:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:20:17.806 10:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:20:17.807 10:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81680 00:20:17.807 10:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81001 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:17.807 10:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:24.400 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:24.400 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:24.400 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:24.400 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:24.400 Attaching 4 probes... 
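This pass is different in kind: instead of an ANA state change, the optimized 4421 listener was removed outright (nvmf_subsystem_remove_listener above), so the host's multipath layer has to fail over to the surviving non_optimized 4420 path, and the samples below should all land there. Condensed, using the confirm_io_on_port helper sketched earlier:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  sleep 1                                    # give the initiator a moment to notice and fail over
  confirm_io_on_port non_optimized 4420      # expect every sample on the remaining path

The listener is added back and set optimized right after, which is the re-add/failback case the final probe window checks.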
00:20:24.400 @path[10.0.0.3, 4420]: 16956 00:20:24.400 @path[10.0.0.3, 4420]: 17405 00:20:24.400 @path[10.0.0.3, 4420]: 17424 00:20:24.400 @path[10.0.0.3, 4420]: 17279 00:20:24.400 @path[10.0.0.3, 4420]: 17252 00:20:24.400 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:24.400 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:24.400 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:24.400 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:24.400 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:24.400 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:24.400 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81680 00:20:24.400 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:24.400 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:24.400 [2024-11-15 10:43:49.872992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:24.400 10:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:24.965 10:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:20:31.555 10:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:20:31.555 10:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81860 00:20:31.555 10:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81001 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:31.555 10:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:36.825 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:36.825 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:37.083 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:37.083 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:37.083 Attaching 4 probes... 
00:20:37.083 @path[10.0.0.3, 4421]: 17063
00:20:37.083 @path[10.0.0.3, 4421]: 17405
00:20:37.083 @path[10.0.0.3, 4421]: 17265
00:20:37.083 @path[10.0.0.3, 4421]: 17076
00:20:37.083 @path[10.0.0.3, 4421]: 17356
00:20:37.083 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:20:37.083 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:20:37.083 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}'
00:20:37.084 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421
00:20:37.084 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:20:37.084 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:20:37.084 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81860
00:20:37.084 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:20:37.084 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81049
00:20:37.084 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 81049 ']'
00:20:37.084 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 81049
00:20:37.084 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname
00:20:37.084 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:20:37.084 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81049
00:20:37.084 killing process with pid 81049
10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:20:37.084 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:20:37.084 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81049'
00:20:37.084 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 81049
00:20:37.084 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 81049
00:20:37.350 {
00:20:37.350   "results": [
00:20:37.350     {
00:20:37.350       "job": "Nvme0n1",
00:20:37.350       "core_mask": "0x4",
00:20:37.350       "workload": "verify",
00:20:37.350       "status": "terminated",
00:20:37.350       "verify_range": {
00:20:37.350         "start": 0,
00:20:37.350         "length": 16384
00:20:37.350       },
00:20:37.350       "queue_depth": 128,
00:20:37.350       "io_size": 4096,
00:20:37.350       "runtime": 56.213442,
00:20:37.350       "iops": 7531.93515529613,
00:20:37.350       "mibps": 29.421621700375507,
00:20:37.350       "io_failed": 0,
00:20:37.350       "io_timeout": 0,
00:20:37.350       "avg_latency_us": 16959.697503355983,
00:20:37.350       "min_latency_us": 154.5309090909091,
00:20:37.350       "max_latency_us": 7046430.72
00:20:37.350     }
00:20:37.350   ],
00:20:37.350   "core_count": 1
00:20:37.350 }
00:20:37.350 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81049
10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
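The results block above is bdevperf's summary for the run, and its numbers hang together: 7531.94 IOPS of 4096-byte verify I/O is 7531.94 * 4096 / 2^20 = 29.42 MiB/s, matching "mibps"; at queue depth 128, Little's law gives 128 / 7531.94 = 17.0 ms average latency, in line with avg_latency_us = 16959.7; and max_latency_us of about 7.05 s is consistent with the roughly six-second window in which both paths were inaccessible and I/O could only queue. io_failed stays 0 because the path errors were absorbed by multipath retries rather than surfaced to the workload. What follows is try.txt, bdevperf's own log of the same run.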
24.03.0 initialization... 00:20:37.350 [2024-11-15 10:43:04.186251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81049 ] 00:20:37.350 [2024-11-15 10:43:04.338953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.350 [2024-11-15 10:43:04.416302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.350 [2024-11-15 10:43:04.483273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:37.350 Running I/O for 90 seconds... 00:20:37.350 9063.00 IOPS, 35.40 MiB/s [2024-11-15T10:44:02.848Z] 9190.00 IOPS, 35.90 MiB/s [2024-11-15T10:44:02.848Z] 9121.33 IOPS, 35.63 MiB/s [2024-11-15T10:44:02.848Z] 9079.00 IOPS, 35.46 MiB/s [2024-11-15T10:44:02.848Z] 9029.60 IOPS, 35.27 MiB/s [2024-11-15T10:44:02.848Z] 9014.00 IOPS, 35.21 MiB/s [2024-11-15T10:44:02.848Z] 8986.71 IOPS, 35.10 MiB/s [2024-11-15T10:44:02.848Z] 8968.50 IOPS, 35.03 MiB/s [2024-11-15T10:44:02.848Z] [2024-11-15 10:43:14.732604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.350 [2024-11-15 10:43:14.732687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:37.350 [2024-11-15 10:43:14.732751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.350 [2024-11-15 10:43:14.732776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.732800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.351 [2024-11-15 10:43:14.732817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.732839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.351 [2024-11-15 10:43:14.732855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.732877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.351 [2024-11-15 10:43:14.732893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.732915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.351 [2024-11-15 10:43:14.732931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.732953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.351 [2024-11-15 10:43:14.732969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.732991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.351 [2024-11-15 10:43:14.733007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.351 [2024-11-15 10:43:14.733045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.351 [2024-11-15 10:43:14.733112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.351 [2024-11-15 10:43:14.733152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.351 [2024-11-15 10:43:14.733191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.351 [2024-11-15 10:43:14.733228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.351 [2024-11-15 10:43:14.733266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.351 [2024-11-15 10:43:14.733305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.351 [2024-11-15 10:43:14.733343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:37.351 [2024-11-15 10:43:14.733381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.351 [2024-11-15 10:43:14.733418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.351 [2024-11-15 10:43:14.733458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.351 [2024-11-15 10:43:14.733496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.351 [2024-11-15 10:43:14.733559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.351 [2024-11-15 10:43:14.733608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.351 [2024-11-15 10:43:14.733649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.351 [2024-11-15 10:43:14.733703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.351 [2024-11-15 10:43:14.733742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.351 [2024-11-15 10:43:14.733781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 
nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.351 [2024-11-15 10:43:14.733820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.351 [2024-11-15 10:43:14.733859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.351 [2024-11-15 10:43:14.733897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.351 [2024-11-15 10:43:14.733935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.351 [2024-11-15 10:43:14.733976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.733999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.351 [2024-11-15 10:43:14.734016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.734261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.351 [2024-11-15 10:43:14.734289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.734314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.351 [2024-11-15 10:43:14.734331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.734366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.351 [2024-11-15 10:43:14.734385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.734408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.351 [2024-11-15 10:43:14.734425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.734447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.351 [2024-11-15 10:43:14.734463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.734485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.351 [2024-11-15 10:43:14.734502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.734539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.351 [2024-11-15 10:43:14.734558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.734580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.351 [2024-11-15 10:43:14.734597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:37.351 [2024-11-15 10:43:14.734620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.352 [2024-11-15 10:43:14.734637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:37.352 [2024-11-15 10:43:14.734659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.352 [2024-11-15 10:43:14.734675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:37.352 [2024-11-15 10:43:14.734698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.352 [2024-11-15 10:43:14.734715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:37.352 [2024-11-15 10:43:14.734736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.352 [2024-11-15 10:43:14.734753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:37.352 [2024-11-15 10:43:14.734775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.352 [2024-11-15 10:43:14.734791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:37.352 [2024-11-15 10:43:14.734814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.352 [2024-11-15 10:43:14.734830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 
00:20:37.352 [2024-11-15 10:43:14.734861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.352 [2024-11-15 10:43:14.734878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.734900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.352 [2024-11-15 10:43:14.734917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.734940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.352 [2024-11-15 10:43:14.734956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.734979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.352 [2024-11-15 10:43:14.734995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.352 [2024-11-15 10:43:14.735034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.352 [2024-11-15 10:43:14.735073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.352 [2024-11-15 10:43:14.735112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.352 [2024-11-15 10:43:14.735151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.352 [2024-11-15 10:43:14.735190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.352 [2024-11-15 10:43:14.735229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.352 [2024-11-15 10:43:14.735268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.352 [2024-11-15 10:43:14.735307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.352 [2024-11-15 10:43:14.735352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.352 [2024-11-15 10:43:14.735393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.352 [2024-11-15 10:43:14.735432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.352 [2024-11-15 10:43:14.735471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.352 [2024-11-15 10:43:14.735522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.352 [2024-11-15 10:43:14.735565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.352 [2024-11-15 10:43:14.735604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.352 [2024-11-15 10:43:14.735643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.352 [2024-11-15 10:43:14.735682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.352 [2024-11-15 10:43:14.735721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.352 [2024-11-15 10:43:14.735761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.352 [2024-11-15 10:43:14.735799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.352 [2024-11-15 10:43:14.735845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.352 [2024-11-15 10:43:14.735886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.352 [2024-11-15 10:43:14.735936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.352 [2024-11-15 10:43:14.735976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.735998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.352 [2024-11-15 10:43:14.736015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.736037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.352 [2024-11-15 10:43:14.736053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.736076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.352 [2024-11-15 10:43:14.736092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.736114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.352 [2024-11-15 10:43:14.736130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:20:37.352 [2024-11-15 10:43:14.736153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.353 [2024-11-15 10:43:14.736169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.736192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.353 [2024-11-15 10:43:14.736209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.736249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.736270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.736294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.736311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.736333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.736356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.736387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.736405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.736427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.736444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.736466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.736482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.736505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.736538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.736562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.736579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.736601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.736618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.736640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.736657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.736679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.736696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.736717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.736734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.736756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.736773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.736795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.736812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.736834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.736850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.736880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.736898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.736920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.353 [2024-11-15 10:43:14.736937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.736958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.353 [2024-11-15 10:43:14.736975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.736997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.353 [2024-11-15 10:43:14.737019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.737042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.353 [2024-11-15 10:43:14.737058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.737080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.353 [2024-11-15 10:43:14.737097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.737119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.353 [2024-11-15 10:43:14.737135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.737157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.353 [2024-11-15 10:43:14.737174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.738713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.353 [2024-11-15 10:43:14.738748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.738778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.738798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.738821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.738839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.738861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.738878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.738900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.738929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.738953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.738970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.738992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.739008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.739031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.739048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.739216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.739244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.739271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.739289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.739312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.739329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.739351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.353 [2024-11-15 10:43:14.739374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:20:37.353 [2024-11-15 10:43:14.739397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:14.739415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:14.739438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:14.739455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:14.739477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:14.739494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:14.739530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:14.739551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:14.739578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:14.739607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:14.739631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:14.739648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:14.739671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:14.739687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:14.739710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:14.739726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:14.739749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:14.739766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:20:37.354 8954.33 IOPS, 34.98 MiB/s [2024-11-15T10:44:02.852Z] 8962.90 IOPS, 35.01 MiB/s [2024-11-15T10:44:02.852Z] 8962.64 IOPS, 35.01 MiB/s [2024-11-15T10:44:02.852Z] 8963.58 IOPS, 35.01 MiB/s [2024-11-15T10:44:02.852Z] 8963.31 IOPS, 35.01 MiB/s [2024-11-15T10:44:02.852Z] 8965.50 IOPS, 35.02 MiB/s [2024-11-15T10:44:02.852Z] 8969.40 IOPS, 35.04 MiB/s [2024-11-15T10:44:02.852Z] [2024-11-15 10:43:21.385833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:28712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:21.385899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.385960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:28720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:21.385983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:28728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:21.386023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:28736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:21.386061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:28744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:21.386099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:28752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:21.386136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:28760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:21.386173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:28768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:21.386245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.354 [2024-11-15 10:43:21.386283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.354 [2024-11-15 10:43:21.386321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:28216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.354 [2024-11-15 10:43:21.386357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:28224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.354 [2024-11-15 10:43:21.386394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:28232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.354 [2024-11-15 10:43:21.386431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.354 [2024-11-15 10:43:21.386467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.354 [2024-11-15 10:43:21.386504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.354 [2024-11-15 10:43:21.386560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:28776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:21.386604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:21.386645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:28792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:21.386682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:28800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:21.386731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:28808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:21.386771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:28816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:21.386810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:28824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:21.386848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:28832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:21.386886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:28840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:21.386923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:28848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:21.386961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.386982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:21.386999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.387020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:21.387036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.387057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:28872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.354 [2024-11-15 10:43:21.387074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:20:37.354 [2024-11-15 10:43:21.387096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:28880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.355 [2024-11-15 10:43:21.387112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.387133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:28888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.355 [2024-11-15 10:43:21.387149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.387171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:28896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.355 [2024-11-15 10:43:21.387194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.387217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.387234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.387258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:28272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.387274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.387297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:28280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.387313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.387335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:28288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.387351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.387372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.387389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.387410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:28304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.387426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.387448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.387464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.387485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:28320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.387502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.387537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.387555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.387576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:28336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.387593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.387615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.387631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.387653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:28352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.387669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.387699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:28360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.387716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.387738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:28368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.387755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.387777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:28376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.387793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.387814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:28384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.387831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.387853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.387869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.387892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:28400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.387909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.387931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.387947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.387969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:28416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.387985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.388007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.388024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.388046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.388062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.388083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:28440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.388099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.388122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:28448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.355 [2024-11-15 10:43:21.388138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.388187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.355 [2024-11-15 10:43:21.388209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.388231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:28912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.355 [2024-11-15 10:43:21.388248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.388271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.355 [2024-11-15 10:43:21.388287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.388309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.355 [2024-11-15 10:43:21.388325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:20:37.355 [2024-11-15 10:43:21.388347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:28936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.388363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.388385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:28944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.388401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.388424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:28952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.388440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.388461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:28960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.388478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.388500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.388530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.388555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:28976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.388572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.388594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:28984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.388611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.388633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:28992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.388649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.388671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.388712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.388736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.388753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.388775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.356 [2024-11-15 10:43:21.388791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.388814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.356 [2024-11-15 10:43:21.388830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.388851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.356 [2024-11-15 10:43:21.388868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.388889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:28480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.356 [2024-11-15 10:43:21.388905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.388927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:28488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.356 [2024-11-15 10:43:21.388943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.388975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:28496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.356 [2024-11-15 10:43:21.388992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:28504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.356 [2024-11-15 10:43:21.389031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.356 [2024-11-15 10:43:21.389068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.389106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:29024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.389144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:29032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.389189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.389236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:29048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.389274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.389312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.389350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.389388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:29080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.389426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:29088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.389463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.389500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.389554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.389592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.389630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.389680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:29136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.389729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.389767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:37.356 [2024-11-15 10:43:21.389805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:28520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.356 [2024-11-15 10:43:21.389843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.356 [2024-11-15 10:43:21.389881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:37.356 [2024-11-15 10:43:21.389902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:28536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.356 [2024-11-15 10:43:21.389925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.389946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.357 [2024-11-15 10:43:21.389962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.389984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.357 [2024-11-15 10:43:21.390000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.390022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:28560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.357 [2024-11-15 10:43:21.390038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.390059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.357 [2024-11-15 10:43:21.390075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.390097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:28576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.357 [2024-11-15 10:43:21.390113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.390134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:28584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.357 [2024-11-15 10:43:21.390150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.390179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.357 [2024-11-15 10:43:21.390196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.390218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:28600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.357 [2024-11-15 10:43:21.390234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.390256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.357 [2024-11-15 10:43:21.390273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.390294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:28616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.357 [2024-11-15 10:43:21.390311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.390332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:28624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.357 [2024-11-15 10:43:21.390348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.390370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:28632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.357 [2024-11-15 10:43:21.390386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.391060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:28640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.357 [2024-11-15 10:43:21.391089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.391125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.357 [2024-11-15 10:43:21.391143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.391173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.357 [2024-11-15 10:43:21.391191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.391220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.357 [2024-11-15 10:43:21.391237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.391266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:29184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.357 [2024-11-15 10:43:21.391282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.391311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.357 [2024-11-15 10:43:21.391328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.391357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:29200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.357 [2024-11-15 10:43:21.391385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.391416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:29208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.357 [2024-11-15 10:43:21.391433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.391478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.357 [2024-11-15 10:43:21.391500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.391545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:28648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.357 [2024-11-15 10:43:21.391563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.391592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.357 [2024-11-15 10:43:21.391608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.391638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.357 [2024-11-15 10:43:21.391655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.391684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:28672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.357 [2024-11-15 10:43:21.391701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.391730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.357 [2024-11-15 10:43:21.391746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.391775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:28688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.357 [2024-11-15 10:43:21.391792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.391820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.357 [2024-11-15 10:43:21.391837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:21.391865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:28704 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:37.357 [2024-11-15 10:43:21.391883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:37.357 8412.31 IOPS, 32.86 MiB/s [2024-11-15T10:44:02.855Z] 8431.18 IOPS, 32.93 MiB/s [2024-11-15T10:44:02.855Z] 8453.44 IOPS, 33.02 MiB/s [2024-11-15T10:44:02.855Z] 8475.68 IOPS, 33.11 MiB/s [2024-11-15T10:44:02.855Z] 8496.70 IOPS, 33.19 MiB/s [2024-11-15T10:44:02.855Z] 8514.19 IOPS, 33.26 MiB/s [2024-11-15T10:44:02.855Z] 8529.73 IOPS, 33.32 MiB/s [2024-11-15T10:44:02.855Z] [2024-11-15 10:43:28.591922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.357 [2024-11-15 10:43:28.592004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:28.592095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.357 [2024-11-15 10:43:28.592120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:28.592145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.357 [2024-11-15 10:43:28.592162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:28.592183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.357 [2024-11-15 10:43:28.592199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:28.592221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.357 [2024-11-15 10:43:28.592237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:28.592268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.357 [2024-11-15 10:43:28.592284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:28.592312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.357 [2024-11-15 10:43:28.592328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:37.357 [2024-11-15 10:43:28.592349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.357 [2024-11-15 10:43:28.592366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.592388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:21 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.358 [2024-11-15 10:43:28.592404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.592426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.358 [2024-11-15 10:43:28.592441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.592463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.358 [2024-11-15 10:43:28.592479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.592501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.358 [2024-11-15 10:43:28.592534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.592559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.358 [2024-11-15 10:43:28.592575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.592609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.358 [2024-11-15 10:43:28.592628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.592651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.358 [2024-11-15 10:43:28.592668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.592690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.358 [2024-11-15 10:43:28.592706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.592729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.592745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.592769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.592793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.592816] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:74368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.592833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.592855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.592871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.592893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:74384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.592910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.592932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.592948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.592970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.592987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:74408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.593025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.593064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.593113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.593152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.593191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0036 
p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:74448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.593229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:74456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.593268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.593306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:74472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.593346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.593384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.593432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:74496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.593471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:74504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.593521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.593564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.593610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:74528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.593651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.358 [2024-11-15 10:43:28.593707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.358 [2024-11-15 10:43:28.593752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.358 [2024-11-15 10:43:28.593792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.358 [2024-11-15 10:43:28.593831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.358 [2024-11-15 10:43:28.593870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.358 [2024-11-15 10:43:28.593908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.358 [2024-11-15 10:43:28.593947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:37.358 [2024-11-15 10:43:28.593969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.593986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.594025] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.594063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.594111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.594152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.594191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.594230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.594268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:74544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.359 [2024-11-15 10:43:28.594307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:74552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.359 [2024-11-15 10:43:28.594347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.359 [2024-11-15 10:43:28.594386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74568 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:37.359 [2024-11-15 10:43:28.594425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.359 [2024-11-15 10:43:28.594463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.359 [2024-11-15 10:43:28.594501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:74592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.359 [2024-11-15 10:43:28.594554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.359 [2024-11-15 10:43:28.594593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.594645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.594684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.594722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.594762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.594801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:33 nsid:1 lba:75144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.594839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.594878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.594916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.594954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.594977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.594994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.595038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.595059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.595083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.595100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.595134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.595153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.595176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.595193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.595215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.595232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.595255] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.595271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.595293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.359 [2024-11-15 10:43:28.595310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:37.359 [2024-11-15 10:43:28.595332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.360 [2024-11-15 10:43:28.595349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.595371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:74608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.595387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.595410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:74616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.595427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.595450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.595467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.595489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:74632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.595528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.595554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.595571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.595593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:74648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.595609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.595631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.595656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 
00:20:37.360 [2024-11-15 10:43:28.595680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.595697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.595720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.595737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.595759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:74680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.595775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.595797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.595814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.595837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.595865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.595889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:74704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.595905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.595927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.595944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.595966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:74720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.595983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.596022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.596061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.596100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.596145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.596186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.596225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.596264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.596302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.596341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.360 [2024-11-15 10:43:28.596380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.360 [2024-11-15 10:43:28.596419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.360 [2024-11-15 10:43:28.596457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.360 [2024-11-15 10:43:28.596501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.360 [2024-11-15 10:43:28.596557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.360 [2024-11-15 10:43:28.596596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.360 [2024-11-15 10:43:28.596635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.360 [2024-11-15 10:43:28.596692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.596731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:74808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.596772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.596810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.360 [2024-11-15 10:43:28.596850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:37.360 [2024-11-15 10:43:28.596888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:37.360 [2024-11-15 10:43:28.596911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:74840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.361 [2024-11-15 10:43:28.596927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:28.596950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.361 [2024-11-15 10:43:28.596966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:28.597706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:74856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.361 [2024-11-15 10:43:28.597737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:28.597774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:28.597793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:28.597823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:28.597841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:28.597870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:28.597886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:28.597938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:28.597962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:28.597993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:28.598010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:28.598039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:28.598056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:28.598085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:28.598102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:28.598148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:28.598170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:37.361 8239.91 IOPS, 32.19 MiB/s [2024-11-15T10:44:02.859Z] 7896.58 IOPS, 30.85 MiB/s [2024-11-15T10:44:02.859Z] 7580.72 IOPS, 29.61 MiB/s [2024-11-15T10:44:02.859Z] 7289.15 IOPS, 28.47 MiB/s [2024-11-15T10:44:02.859Z] 7019.19 IOPS, 27.42 MiB/s [2024-11-15T10:44:02.859Z] 6768.50 IOPS, 26.44 MiB/s [2024-11-15T10:44:02.859Z] 6535.10 IOPS, 25.53 MiB/s [2024-11-15T10:44:02.859Z] 6540.40 IOPS, 25.55 MiB/s [2024-11-15T10:44:02.859Z] 6608.90 IOPS, 25.82 MiB/s [2024-11-15T10:44:02.859Z] 6674.38 IOPS, 26.07 MiB/s [2024-11-15T10:44:02.859Z] 6737.09 IOPS, 26.32 MiB/s [2024-11-15T10:44:02.859Z] 6795.88 IOPS, 26.55 MiB/s [2024-11-15T10:44:02.859Z] 6852.23 IOPS, 26.77 MiB/s [2024-11-15T10:44:02.859Z] [2024-11-15 10:43:42.192066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:42.192149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.192214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:42.192238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.192272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:42.192289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.192310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:42.192326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.192348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:42.192364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.192387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:42.192403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.192452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:89 nsid:1 lba:9424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:42.192470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.192492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:42.192508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.192546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:42.192563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.192585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:42.192601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.192623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:42.192639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.192660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:42.192677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.192700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:42.192717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.192738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:42.192754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.192775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:42.192791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.192813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.361 [2024-11-15 10:43:42.192829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.192850] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.361 [2024-11-15 10:43:42.192866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.192889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.361 [2024-11-15 10:43:42.192905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.192927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.361 [2024-11-15 10:43:42.192953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.192976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.361 [2024-11-15 10:43:42.192993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.193015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.361 [2024-11-15 10:43:42.193032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.193054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.361 [2024-11-15 10:43:42.193070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.193092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.361 [2024-11-15 10:43:42.193109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:37.361 [2024-11-15 10:43:42.193131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.361 [2024-11-15 10:43:42.193148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.193170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.193186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.193209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.193225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
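
The `(03/02)` pair printed by `spdk_nvme_print_completion` in the completions above is the NVMe Status Code Type / Status Code from the completion's status word: SCT `0x3` is Path Related Status and SC `0x2` is Asymmetric Access Inaccessible, which is what the host sees while this multipath test drives the active ANA group inaccessible. The interleaved throughput samples follow from the 4 KiB I/O size, e.g. 8239.91 IOPS × 4096 B ÷ 2^20 ≈ 32.19 MiB/s, matching the log. Below is a minimal sketch of the status-word decode, assuming only the spec-defined completion-queue-entry layout; it is not SPDK's implementation (SPDK carries these bits in `struct spdk_nvme_cpl`):

```c
/* Sketch only: decode the "(SCT/SC)" pair that spdk_nvme_print_completion
 * logs, e.g. "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" or
 * "ABORTED - SQ DELETION (00/08)". Field offsets follow the NVMe spec's
 * completion status word (completion dword 3, bits 31:16). */
#include <stdint.h>
#include <stdio.h>

static const char *decode_status(uint8_t sct, uint8_t sc)
{
    if (sct == 0x0 && sc == 0x08)
        return "ABORTED - SQ DELETION";          /* Generic Command Status  */
    if (sct == 0x3 && sc == 0x02)
        return "ASYMMETRIC ACCESS INACCESSIBLE"; /* Path Related Status/ANA */
    return "OTHER";
}

int main(void)
{
    /* 16-bit status field: bit 0 = phase tag, bits 8:1 = SC,
     * bits 11:9 = SCT, bits 13:12 = CRD, bit 14 = M, bit 15 = DNR */
    uint16_t status = (0x3 << 9) | (0x02 << 1); /* sct=03, sc=02 */
    uint8_t sc  = (status >> 1) & 0xff;
    uint8_t sct = (status >> 9) & 0x7;
    printf("(%02x/%02x) -> %s\n", sct, sc, decode_status(sct, sc));
    return 0;
}
```

Compiled and run, this prints `(03/02) -> ASYMMETRIC ACCESS INACCESSIBLE`; the `(00/08)` completions that appear further down decode the same way to the generic ABORTED - SQ DELETION status.
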
00:20:37.362 [2024-11-15 10:43:42.193248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.193264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.193286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.193302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.193325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.193342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.193365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.193382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.193404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.193428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.193452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.193468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.193490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.193506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.193545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.193563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.193584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.193601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.193623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.193639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.193661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.193692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.193715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.193731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.193753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.193770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.193792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.193808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.193861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.362 [2024-11-15 10:43:42.193884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.193901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.362 [2024-11-15 10:43:42.193915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.193931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.362 [2024-11-15 10:43:42.193945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.193978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.362 [2024-11-15 10:43:42.193994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.194010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.362 [2024-11-15 10:43:42.194024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.194040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.362 [2024-11-15 10:43:42.194054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.194069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.362 [2024-11-15 10:43:42.194084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.194099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.362 [2024-11-15 10:43:42.194113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.194129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.194143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.194158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.194172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.194188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.194202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.194217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.194231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.194247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.194260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.194276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.194289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.194305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.194320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.194336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.362 [2024-11-15 10:43:42.194355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 
10:43:42.194372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.362 [2024-11-15 10:43:42.194386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.194401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.362 [2024-11-15 10:43:42.194415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.194440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.362 [2024-11-15 10:43:42.194455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.194471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.362 [2024-11-15 10:43:42.194484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.194500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.362 [2024-11-15 10:43:42.194529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.194547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.362 [2024-11-15 10:43:42.194561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.362 [2024-11-15 10:43:42.194577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.362 [2024-11-15 10:43:42.194591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.194607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.363 [2024-11-15 10:43:42.194621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.194636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.363 [2024-11-15 10:43:42.194650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.194665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.363 [2024-11-15 10:43:42.194679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.194695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.363 [2024-11-15 10:43:42.194708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.194724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.363 [2024-11-15 10:43:42.194738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.194753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.363 [2024-11-15 10:43:42.194774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.194790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.363 [2024-11-15 10:43:42.194805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.194820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.363 [2024-11-15 10:43:42.194834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.194849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.363 [2024-11-15 10:43:42.194863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.194878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.363 [2024-11-15 10:43:42.194892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.194907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.363 [2024-11-15 10:43:42.194921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.194942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.363 [2024-11-15 10:43:42.194956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.194972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.363 [2024-11-15 10:43:42.194985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 
nsid:1 lba:9728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.363 [2024-11-15 10:43:42.195015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.363 [2024-11-15 10:43:42.195045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.363 [2024-11-15 10:43:42.195074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.363 [2024-11-15 10:43:42.195103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.363 [2024-11-15 10:43:42.195132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.363 [2024-11-15 10:43:42.195168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.363 [2024-11-15 10:43:42.195197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.363 [2024-11-15 10:43:42.195226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.363 [2024-11-15 10:43:42.195255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.363 [2024-11-15 10:43:42.195284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:37.363 [2024-11-15 10:43:42.195312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.363 [2024-11-15 10:43:42.195341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.363 [2024-11-15 10:43:42.195369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.363 [2024-11-15 10:43:42.195398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.363 [2024-11-15 10:43:42.195432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.363 [2024-11-15 10:43:42.195462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.363 [2024-11-15 10:43:42.195496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.363 [2024-11-15 10:43:42.195548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.363 [2024-11-15 10:43:42.195579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.363 [2024-11-15 10:43:42.195608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.363 [2024-11-15 10:43:42.195637] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.363 [2024-11-15 10:43:42.195666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.363 [2024-11-15 10:43:42.195695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.363 [2024-11-15 10:43:42.195723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.363 [2024-11-15 10:43:42.195752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.363 [2024-11-15 10:43:42.195782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.363 [2024-11-15 10:43:42.195798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.364 [2024-11-15 10:43:42.195811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.364 [2024-11-15 10:43:42.195826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.364 [2024-11-15 10:43:42.195839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.364 [2024-11-15 10:43:42.195855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.364 [2024-11-15 10:43:42.195869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.364 [2024-11-15 10:43:42.195884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.364 [2024-11-15 10:43:42.195897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.364 [2024-11-15 10:43:42.195916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.364 [2024-11-15 10:43:42.195937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.364 [2024-11-15 10:43:42.195953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.364 [2024-11-15 10:43:42.195967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.364 [2024-11-15 10:43:42.195987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.364 [2024-11-15 10:43:42.196001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.364 [2024-11-15 10:43:42.196017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.364 [2024-11-15 10:43:42.196030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.364 [2024-11-15 10:43:42.196046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.364 [2024-11-15 10:43:42.196060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.364 [2024-11-15 10:43:42.196075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:37.364 [2024-11-15 10:43:42.196088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.364 [2024-11-15 10:43:42.196104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.364 [2024-11-15 10:43:42.196117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.364 [2024-11-15 10:43:42.196132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.364 [2024-11-15 10:43:42.196146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.364 [2024-11-15 10:43:42.196161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.364 [2024-11-15 10:43:42.196175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.364 [2024-11-15 10:43:42.196190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.364 [2024-11-15 10:43:42.196203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.364 [2024-11-15 10:43:42.196219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.364 [2024-11-15 10:43:42.196232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
00:20:37.364 [2024-11-15 10:43:42.196247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.364 [2024-11-15 10:43:42.196261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.364 [2024-11-15 10:43:42.196286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.364 [2024-11-15 10:43:42.196300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.364 [2024-11-15 10:43:42.196324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb36290 is same with the state(6) to be set
00:20:37.364 [2024-11-15 10:43:42.196342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:37.364 [2024-11-15 10:43:42.196353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:37.364 [2024-11-15 10:43:42.196364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9368 len:8 PRP1 0x0 PRP2 0x0
00:20:37.364 [2024-11-15 10:43:42.196378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.364 [2024-11-15 10:43:42.196393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:37.364 [2024-11-15 10:43:42.196408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:37.364 [2024-11-15 10:43:42.196419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:8 PRP1 0x0 PRP2 0x0
00:20:37.364 [2024-11-15 10:43:42.196433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.364 [2024-11-15 10:43:42.196446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:37.364 [2024-11-15 10:43:42.196457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:37.364 [2024-11-15 10:43:42.196468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9832 len:8 PRP1 0x0 PRP2 0x0
00:20:37.364 [2024-11-15 10:43:42.196481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.364 [2024-11-15 10:43:42.196494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:37.364 [2024-11-15 10:43:42.196504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:37.364 [2024-11-15 10:43:42.196525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9840 len:8 PRP1 0x0 PRP2 0x0
00:20:37.364 [2024-11-15 10:43:42.196540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.364 [2024-11-15 10:43:42.196554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:37.364 [2024-11-15 10:43:42.196564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:37.364 [2024-11-15 10:43:42.196574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9848 len:8 PRP1 0x0 PRP2 0x0
00:20:37.364 [2024-11-15 10:43:42.196587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.364 [2024-11-15 10:43:42.196601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:37.364 [2024-11-15 10:43:42.196610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:37.364 [2024-11-15 10:43:42.196621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:8 PRP1 0x0 PRP2 0x0
00:20:37.364 [2024-11-15 10:43:42.196634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.364 [2024-11-15 10:43:42.196647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:37.364 [2024-11-15 10:43:42.196657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:37.364 [2024-11-15 10:43:42.196668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9864 len:8 PRP1 0x0 PRP2 0x0
00:20:37.364 [2024-11-15 10:43:42.196680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.364 [2024-11-15 10:43:42.196694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:37.364 [2024-11-15 10:43:42.196703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:37.364 [2024-11-15 10:43:42.196720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9872 len:8 PRP1 0x0 PRP2 0x0
00:20:37.364 [2024-11-15 10:43:42.196735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.364 [2024-11-15 10:43:42.196748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:37.364 [2024-11-15 10:43:42.196758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:37.364 [2024-11-15 10:43:42.196768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9880 len:8 PRP1 0x0 PRP2 0x0
00:20:37.364 [2024-11-15 10:43:42.196781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.364 [2024-11-15 10:43:42.198022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:20:37.364 [2024-11-15 10:43:42.198106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.364 [2024-11-15 10:43:42.198130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.364 [2024-11-15 10:43:42.198167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa71d0 (9): Bad file descriptor
00:20:37.365 [2024-11-15 10:43:42.198599] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.365 [2024-11-15 10:43:42.198634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa71d0 with addr=10.0.0.3, port=4421
00:20:37.365 [2024-11-15 10:43:42.198652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa71d0 is same with the state(6) to be set
00:20:37.365 [2024-11-15 10:43:42.198686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa71d0 (9): Bad file descriptor
00:20:37.365 [2024-11-15 10:43:42.198718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:20:37.365 [2024-11-15 10:43:42.198735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:20:37.365 [2024-11-15 10:43:42.198750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:20:37.365 [2024-11-15 10:43:42.198763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:20:37.365 [2024-11-15 10:43:42.198778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:20:37.365 6901.92 IOPS, 26.96 MiB/s [2024-11-15T10:44:02.863Z] 6951.92 IOPS, 27.16 MiB/s [2024-11-15T10:44:02.863Z] 6993.82 IOPS, 27.32 MiB/s [2024-11-15T10:44:02.863Z] 7037.67 IOPS, 27.49 MiB/s [2024-11-15T10:44:02.863Z] 7079.52 IOPS, 27.65 MiB/s [2024-11-15T10:44:02.863Z] 7118.95 IOPS, 27.81 MiB/s [2024-11-15T10:44:02.863Z] 7154.40 IOPS, 27.95 MiB/s [2024-11-15T10:44:02.863Z] 7188.58 IOPS, 28.08 MiB/s [2024-11-15T10:44:02.863Z] 7221.93 IOPS, 28.21 MiB/s [2024-11-15T10:44:02.863Z] 7254.87 IOPS, 28.34 MiB/s [2024-11-15T10:44:02.863Z] [2024-11-15 10:43:52.269866] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
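
The sequence above is the failover under test: queued I/O is aborted and completed manually with `ABORTED - SQ DELETION (00/08)`, the controller disconnects, and the first reconnect to the backup portal at 10.0.0.3:4421 fails in `uring_sock_create` with errno 111, which on Linux is `ECONNREFUSED` (no listener yet). A later attempt succeeds ("Resetting controller successful") and throughput recovers from the ~6,500 IOPS trough back toward ~7,500 IOPS. A hedged sketch of that connect-and-retry pattern, as an illustration only and not SPDK's `bdev_nvme` reset path (address, port, and the one-second backoff are assumptions; the portal values mirror the log):

```c
/* Sketch: retry a TCP connect that is refused while the listener is down.
 * Mirrors the log's "connect() failed, errno = 111" (ECONNREFUSED). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(4421) }; /* backup portal */
    inet_pton(AF_INET, "10.0.0.3", &addr.sin_addr);

    for (int attempt = 1; attempt <= 5; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("attempt %d: connected\n", attempt);
            close(fd);
            return 0;
        }
        /* While the portal is down this prints errno = 111 on Linux. */
        printf("attempt %d: connect() failed, errno = %d (%s)\n",
               attempt, errno, strerror(errno));
        close(fd);
        sleep(1); /* back off before the next reconnect attempt */
    }
    return 1;
}
```
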
00:20:37.365 7286.35 IOPS, 28.46 MiB/s [2024-11-15T10:44:02.863Z] 7316.51 IOPS, 28.58 MiB/s [2024-11-15T10:44:02.863Z] 7342.25 IOPS, 28.68 MiB/s [2024-11-15T10:44:02.863Z] 7370.04 IOPS, 28.79 MiB/s [2024-11-15T10:44:02.863Z] 7396.72 IOPS, 28.89 MiB/s [2024-11-15T10:44:02.863Z] 7420.16 IOPS, 28.98 MiB/s [2024-11-15T10:44:02.863Z] 7444.54 IOPS, 29.08 MiB/s [2024-11-15T10:44:02.863Z] 7467.40 IOPS, 29.17 MiB/s [2024-11-15T10:44:02.863Z] 7486.89 IOPS, 29.25 MiB/s [2024-11-15T10:44:02.863Z] 7508.87 IOPS, 29.33 MiB/s [2024-11-15T10:44:02.863Z] 7529.36 IOPS, 29.41 MiB/s [2024-11-15T10:44:02.863Z] Received shutdown signal, test time was about 56.214267 seconds
00:20:37.365
00:20:37.365 Latency(us)
00:20:37.365 [2024-11-15T10:44:02.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:37.365 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:37.365 Verification LBA range: start 0x0 length 0x4000
00:20:37.365 Nvme0n1 : 56.21 7531.94 29.42 0.00 0.00 16959.70 154.53 7046430.72
00:20:37.365 [2024-11-15T10:44:02.863Z] ===================================================================================================================
00:20:37.365 [2024-11-15T10:44:02.863Z] Total : 7531.94 29.42 0.00 0.00 16959.70 154.53 7046430.72
00:20:37.365 10:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:37.932 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:20:37.932 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:20:37.932 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:20:37.932 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:37.932 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:20:37.932 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:37.932 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
00:20:37.932 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:37.932 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:37.932 rmmod nvme_tcp
00:20:37.932 rmmod nvme_fabrics
00:20:37.932 rmmod nvme_keyring
00:20:37.932 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:37.932 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e
00:20:37.932 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0
00:20:37.932 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 81001 ']'
00:20:37.932 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 81001
00:20:37.932 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 81001 ']'
00:20:37.932 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 81001
00:20:37.932 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname
00:20:37.932 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:20:37.932 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81001
00:20:37.932 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:20:37.932 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:20:37.932 killing process with pid 81001
00:20:37.933 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81001'
00:20:37.933 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 81001
00:20:37.933 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 81001
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:38.191 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0
00:20:38.450
00:20:38.450 real 1m2.424s
00:20:38.450 user 2m54.163s
00:20:38.450 sys 0m18.269s
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:20:38.450 ************************************
00:20:38.450 END TEST nvmf_host_multipath
00:20:38.450 ************************************
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:20:38.450 ************************************
00:20:38.450 START TEST nvmf_timeout
00:20:38.450 ************************************
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:20:38.450 * Looking for test storage...
00:20:38.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lcov --version
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-:
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-:
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<'
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:20:38.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:38.450 --rc genhtml_branch_coverage=1
00:20:38.450 --rc genhtml_function_coverage=1
00:20:38.450 --rc genhtml_legend=1
00:20:38.450 --rc geninfo_all_blocks=1
00:20:38.450 --rc geninfo_unexecuted_blocks=1
00:20:38.450
00:20:38.450 '
00:20:38.450 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:20:38.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:38.450 --rc genhtml_branch_coverage=1
00:20:38.450 --rc genhtml_function_coverage=1
00:20:38.450 --rc genhtml_legend=1
00:20:38.450 --rc geninfo_all_blocks=1
00:20:38.450 --rc geninfo_unexecuted_blocks=1
00:20:38.450
00:20:38.450 '
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:20:38.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:38.763 --rc genhtml_branch_coverage=1
00:20:38.763 --rc genhtml_function_coverage=1
00:20:38.763 --rc genhtml_legend=1
00:20:38.763 --rc geninfo_all_blocks=1
00:20:38.763 --rc geninfo_unexecuted_blocks=1
00:20:38.763
00:20:38.763 '
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:20:38.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:38.763 --rc genhtml_branch_coverage=1
00:20:38.763 --rc genhtml_function_coverage=1
00:20:38.763 --rc genhtml_legend=1
00:20:38.763 --rc geninfo_all_blocks=1
00:20:38.763 --rc geninfo_unexecuted_blocks=1
00:20:38.763
00:20:38.763 '
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:38.763 10:44:03
nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:38.763 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:38.763 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 
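The "[: : integer expression expected" message above is bash complaining that '[' '' -eq 1 ']' applies the numeric -eq operator to an empty expansion (nvmf/common.sh line 33 was reached with an unset value). A minimal reproduction and one guarded form, as a sketch only; the variable name below is illustrative, not the one common.sh actually tests:

    # reproduce: -eq needs integer operands, so an empty value errors out
    flag=""
    [ "$flag" -eq 1 ] && echo on       # bash: [: : integer expression expected
    # guard: substitute a default so the operand is always numeric
    [ "${flag:-0}" -eq 1 ] && echo on  # quietly false instead of erroring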
00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:38.764 Cannot find device "nvmf_init_br" 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:38.764 Cannot find device "nvmf_init_br2" 00:20:38.764 10:44:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout 
-- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:38.764 Cannot find device "nvmf_tgt_br" 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:38.764 Cannot find device "nvmf_tgt_br2" 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:38.764 Cannot find device "nvmf_init_br" 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:38.764 Cannot find device "nvmf_init_br2" 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:38.764 Cannot find device "nvmf_tgt_br" 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:38.764 Cannot find device "nvmf_tgt_br2" 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:38.764 Cannot find device "nvmf_br" 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:38.764 Cannot find device "nvmf_init_if" 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:38.764 Cannot find device "nvmf_init_if2" 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:38.764 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:38.764 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:38.764 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:39.022 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:39.022 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:20:39.022 00:20:39.022 --- 10.0.0.3 ping statistics --- 00:20:39.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.022 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:39.022 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:39.022 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:20:39.022 00:20:39.022 --- 10.0.0.4 ping statistics --- 00:20:39.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.022 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:39.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:39.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:20:39.022 00:20:39.022 --- 10.0.0.1 ping statistics --- 00:20:39.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.022 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:39.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:39.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:20:39.022 00:20:39.022 --- 10.0.0.2 ping statistics --- 00:20:39.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.022 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:20:39.022 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:39.023 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:39.023 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:39.023 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:39.023 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:39.023 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:39.023 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:39.023 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:20:39.023 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:39.023 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:39.023 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:39.023 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=82223 00:20:39.023 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:39.023 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 82223 00:20:39.023 10:44:04 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 82223 ']' 00:20:39.023 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.023 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:39.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.023 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.023 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:39.023 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:39.023 [2024-11-15 10:44:04.480431] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:20:39.023 [2024-11-15 10:44:04.480734] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.281 [2024-11-15 10:44:04.631615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:39.281 [2024-11-15 10:44:04.700029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.281 [2024-11-15 10:44:04.700096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.281 [2024-11-15 10:44:04.700111] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.281 [2024-11-15 10:44:04.700121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.281 [2024-11-15 10:44:04.700130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
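waitforlisten above blocks until the nvmf_tgt just launched as pid 82223 is usable on /var/tmp/spdk.sock. One simple way to express that wait with the same rpc.py this run already uses (a sketch only; the real helper in autotest_common.sh also checks that the process itself stays alive):

    # poll the SPDK RPC socket until the target answers a trivial RPC
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1  # keep retrying while the app is still initializing
    done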
00:20:39.281 [2024-11-15 10:44:04.701641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.281 [2024-11-15 10:44:04.701655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.281 [2024-11-15 10:44:04.758550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:39.540 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:39.540 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:20:39.540 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:39.540 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:39.540 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:39.540 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.540 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:39.540 10:44:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:39.798 [2024-11-15 10:44:05.176865] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.798 10:44:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:40.056 Malloc0 00:20:40.314 10:44:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:40.572 10:44:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:40.572 10:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:40.830 [2024-11-15 10:44:06.301641] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:40.831 10:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82266 00:20:40.831 10:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:40.831 10:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82266 /var/tmp/bdevperf.sock 00:20:40.831 10:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 82266 ']' 00:20:40.831 10:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.831 10:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:40.831 10:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
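For readability, the target-side provisioning the trace above just performed, collected in order. Every command and parameter is copied from the log; 64 and 512 are the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE values set by timeout.sh, and the transport flags are reproduced exactly as issued:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, flags as issued above
    "$rpc" bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB RAM bdev, 512 B blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # later traces show it as nsid:1
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420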
00:20:40.831 10:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:40.831 10:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:41.089 [2024-11-15 10:44:06.381061] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:20:41.089 [2024-11-15 10:44:06.381172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82266 ] 00:20:41.089 [2024-11-15 10:44:06.533368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.348 [2024-11-15 10:44:06.601632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.348 [2024-11-15 10:44:06.660236] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:41.348 10:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:41.348 10:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:20:41.348 10:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:41.613 10:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:41.880 NVMe0n1 00:20:41.880 10:44:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82282 00:20:41.880 10:44:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:41.880 10:44:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:20:42.138 Running I/O for 10 seconds... 
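The two bdevperf-side RPCs above are the knobs this timeout test exercises: a 5-second controller-loss timeout with reconnect attempts every 2 seconds, plus -r -1, which reads as an unlimited retry count (an inference; the log itself does not expand that flag). Restated as a standalone sketch against the bdevperf RPC socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    "$rpc" -s "$sock" bdev_nvme_set_options -r -1
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2  # tolerate 5 s loss, retry every 2 s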
00:20:43.075 10:44:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:43.075 6677.00 IOPS, 26.08 MiB/s [2024-11-15T10:44:08.573Z] [2024-11-15 10:44:08.546832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.075 [2024-11-15 10:44:08.546890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.075 [2024-11-15 10:44:08.546915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.075 [2024-11-15 10:44:08.546927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.075 [2024-11-15 10:44:08.546939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.076 [2024-11-15 10:44:08.546949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.076 [2024-11-15 10:44:08.546960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.076 [2024-11-15 10:44:08.546970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.076 [2024-11-15 10:44:08.546982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.076 [2024-11-15 10:44:08.546992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.076 [2024-11-15 10:44:08.547004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.076 [2024-11-15 10:44:08.547013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.076 [2024-11-15 10:44:08.547024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.076 [2024-11-15 10:44:08.547034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.076 [2024-11-15 10:44:08.547045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.076 [2024-11-15 10:44:08.547054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.076 [2024-11-15 10:44:08.547066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.076 [2024-11-15 10:44:08.547075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.076 [2024-11-15 10:44:08.547087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61792 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.076 [2024-11-15 10:44:08.547096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.076 [trace condensed: the same two NOTICE records repeat for every queued WRITE from lba:61800 through lba:62352 in steps of 8 (len:8), each command completing ABORTED - SQ DELETION (00/08) qid:1 after the listener removal] [2024-11-15 10:44:08.552740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:12 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.552750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.552762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.552875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.552896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.552906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.553060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.553160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.553176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.553186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.553206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.553217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.553228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.553238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.553249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.553259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.553270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.553408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.553501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.553528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.553541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62440 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.553551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.553563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.553573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.553584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.553594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.553606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.554015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.554034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.554044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.554055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.554065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.554077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.554086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.554097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.554107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.554118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.554226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.554241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.554378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.554525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 
10:44:08.554548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.554684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.554823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.554929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.554941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.554953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.554963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.554975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.554984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.554996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.555005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.555016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.555026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.555130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.555142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.555153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.555298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.555392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.078 [2024-11-15 10:44:08.555404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.078 [2024-11-15 10:44:08.555415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.079 [2024-11-15 10:44:08.555424] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.079 [2024-11-15 10:44:08.555436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.079 [2024-11-15 10:44:08.555446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.079 [2024-11-15 10:44:08.555457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:61616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.079 [2024-11-15 10:44:08.555466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.079 [2024-11-15 10:44:08.555477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.079 [2024-11-15 10:44:08.555840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.079 [2024-11-15 10:44:08.555869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.079 [2024-11-15 10:44:08.555881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.079 [2024-11-15 10:44:08.555892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.079 [2024-11-15 10:44:08.555902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.079 [2024-11-15 10:44:08.555913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.079 [2024-11-15 10:44:08.555923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.079 [2024-11-15 10:44:08.555935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:61656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.079 [2024-11-15 10:44:08.555944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.079 [2024-11-15 10:44:08.555956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.079 [2024-11-15 10:44:08.555965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.079 [2024-11-15 10:44:08.556218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.079 [2024-11-15 10:44:08.556239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.079 [2024-11-15 10:44:08.556366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.079 [2024-11-15 10:44:08.556377] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.079 [2024-11-15 10:44:08.556492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.079 [2024-11-15 10:44:08.556508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.079 [2024-11-15 10:44:08.556534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.079 [2024-11-15 10:44:08.556683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.079 [2024-11-15 10:44:08.556823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:61704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.079 [2024-11-15 10:44:08.556950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.079 [2024-11-15 10:44:08.556968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.079 [2024-11-15 10:44:08.557107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.079 [2024-11-15 10:44:08.557329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.079 [2024-11-15 10:44:08.557351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.079 [2024-11-15 10:44:08.557364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.079 [2024-11-15 10:44:08.557373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.079 [2024-11-15 10:44:08.557385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa5f70 is same with the state(6) to be set 00:20:43.079 [2024-11-15 10:44:08.557399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:43.079 [2024-11-15 10:44:08.557407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:43.079 [2024-11-15 10:44:08.557415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62616 len:8 PRP1 0x0 PRP2 0x0 00:20:43.079 [2024-11-15 10:44:08.557425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.079 [2024-11-15 10:44:08.557968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:43.079 [2024-11-15 10:44:08.557999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.079 [2024-11-15 10:44:08.558012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:43.079 [2024-11-15 10:44:08.558021] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.079 [2024-11-15 10:44:08.558031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:43.079 [2024-11-15 10:44:08.558040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.079 [2024-11-15 10:44:08.558051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:43.079 [2024-11-15 10:44:08.558060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.079 [2024-11-15 10:44:08.558197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f38e50 is same with the state(6) to be set 00:20:43.079 [2024-11-15 10:44:08.558644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:43.079 [2024-11-15 10:44:08.558682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f38e50 (9): Bad file descriptor 00:20:43.079 [2024-11-15 10:44:08.558960] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.079 [2024-11-15 10:44:08.558995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f38e50 with addr=10.0.0.3, port=4420 00:20:43.079 [2024-11-15 10:44:08.559008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f38e50 is same with the state(6) to be set 00:20:43.079 [2024-11-15 10:44:08.559029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f38e50 (9): Bad file descriptor 00:20:43.079 [2024-11-15 10:44:08.559047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:43.079 [2024-11-15 10:44:08.559323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:43.079 [2024-11-15 10:44:08.559350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:43.079 [2024-11-15 10:44:08.559363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
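Two codes dominate the burst above and recur for the rest of this run; a quick decode as a minimal shell sketch (neither command is part of timeout.sh, they are illustrative only):
    # errno 111 from uring_sock_create() is Linux ECONNREFUSED: the target's
    # listener is gone, so every TCP reconnect attempt is refused outright.
    python3 -c 'import errno; print(errno.errorcode[111])'
    # "(00/08)" in spdk_nvme_print_completion output is (status code type / status
    # code): NVMe generic status 0x08 = Command Aborted due to SQ Deletion.
    printf 'sct=0x%02x sc=0x%02x\n' 0 8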
00:20:43.079 [2024-11-15 10:44:08.559374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
10:44:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:20:44.966 3850.00 IOPS, 15.04 MiB/s
[2024-11-15T10:44:10.721Z] 2566.67 IOPS, 10.03 MiB/s
[2024-11-15T10:44:10.721Z] [2024-11-15 10:44:10.559500] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:45.223 [2024-11-15 10:44:10.559577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f38e50 with addr=10.0.0.3, port=4420
00:20:45.223 [2024-11-15 10:44:10.559594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f38e50 is same with the state(6) to be set
00:20:45.223 [2024-11-15 10:44:10.559620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f38e50 (9): Bad file descriptor
00:20:45.223 [2024-11-15 10:44:10.559640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:20:45.223 [2024-11-15 10:44:10.559651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:20:45.223 [2024-11-15 10:44:10.559662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:45.223 [2024-11-15 10:44:10.559673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:20:45.223 [2024-11-15 10:44:10.559684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
10:44:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
10:44:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
10:44:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:20:45.482 10:44:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
10:44:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
10:44:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
10:44:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:20:45.741 10:44:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
10:44:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
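While the reconnect loop spins, the @57/@58 steps above poll bdevperf over its RPC socket and assert the controller and its namespace bdev are still registered. A minimal standalone sketch of the same probe, assuming the bdevperf instance from this log is still listening on /var/tmp/bdevperf.sock:
    # The controller and bdev survive while bdev_nvme keeps retrying the reconnect.
    ctrl=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name')
    [[ $ctrl == NVMe0 && $bdev == NVMe0n1 ]] && echo "controller still attached"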
00:20:46.933 1925.00 IOPS, 7.52 MiB/s
[2024-11-15T10:44:12.689Z] 1540.00 IOPS, 6.02 MiB/s
[2024-11-15T10:44:12.689Z] [2024-11-15 10:44:12.559824] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.191 [2024-11-15 10:44:12.559897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f38e50 with addr=10.0.0.3, port=4420
00:20:47.191 [2024-11-15 10:44:12.559914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f38e50 is same with the state(6) to be set
00:20:47.191 [2024-11-15 10:44:12.559941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f38e50 (9): Bad file descriptor
00:20:47.191 [2024-11-15 10:44:12.559962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:20:47.191 [2024-11-15 10:44:12.559973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:20:47.191 [2024-11-15 10:44:12.559984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:47.192 [2024-11-15 10:44:12.559995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:20:47.192 [2024-11-15 10:44:12.560006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:20:49.111 1283.33 IOPS, 5.01 MiB/s
[2024-11-15T10:44:14.609Z] 1100.00 IOPS, 4.30 MiB/s
[2024-11-15T10:44:14.609Z] [2024-11-15 10:44:14.560066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:49.111 [2024-11-15 10:44:14.560145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:20:49.111 [2024-11-15 10:44:14.560159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:20:49.111 [2024-11-15 10:44:14.560170] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:20:49.111 [2024-11-15 10:44:14.560182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:20:50.306 962.50 IOPS, 3.76 MiB/s
00:20:50.306 Latency(us)
00:20:50.306 [2024-11-15T10:44:15.804Z] Device Information          : runtime(s)  IOPS    MiB/s  Fail/s  TO/s  Average    min      max
00:20:50.306 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:50.306 Verification LBA range: start 0x0 length 0x4000
00:20:50.306 NVMe0n1                     : 8.16        943.97  3.69   15.69   0.00  133370.01  4230.05  7046430.72
00:20:50.306 [2024-11-15T10:44:15.804Z] ===================================================================================================================
00:20:50.306 [2024-11-15T10:44:15.804Z] Total                       : 943.97  3.69   15.69   0.00  133370.01  4230.05  7046430.72
00:20:50.306 {
00:20:50.306   "results": [
00:20:50.306     {
00:20:50.306       "job": "NVMe0n1",
00:20:50.306       "core_mask": "0x4",
00:20:50.306       "workload": "verify",
00:20:50.306       "status": "finished",
00:20:50.306       "verify_range": {
00:20:50.307         "start": 0,
00:20:50.307         "length": 16384
00:20:50.307       },
00:20:50.307       "queue_depth": 128,
00:20:50.307       "io_size": 4096,
00:20:50.307       "runtime": 8.157009,
00:20:50.307       "iops": 943.9734588989666,
00:20:50.307       "mibps": 3.687396323824088,
00:20:50.307       "io_failed": 128,
00:20:50.307       "io_timeout": 0,
00:20:50.307       "avg_latency_us": 133370.0106916895,
00:20:50.307       "min_latency_us": 4230.050909090909,
00:20:50.307       "max_latency_us": 7046430.72
00:20:50.307     }
00:20:50.307   ],
00:20:50.307   "core_count": 1
00:20:50.307 }
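bdevperf prints the human-readable latency table and then the same run as JSON; a minimal sketch of pulling the headline numbers back out with jq (the result.json file name is hypothetical — the blob would have to be captured from bdevperf's stdout first):
    # 943.97 IOPS with 128 failed I/O over an 8.16 s runtime, matching the table above.
    jq -r '.results[] | "\(.job): iops=\(.iops) io_failed=\(.io_failed) avg_latency_us=\(.avg_latency_us)"' result.json
    # Reactor cores the run used.
    jq '.core_count' result.json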
00:20:50.873 10:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
10:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
10:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:20:51.131 10:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
10:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:20:51.131 10:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
10:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:20:51.389 10:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
10:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82282
10:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82266
10:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 82266 ']'
10:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 82266
10:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname
10:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
10:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82266
00:20:51.389 killing process with pid 82266
Received shutdown signal, test time was about 9.448468 seconds
00:20:51.389
00:20:51.389 Latency(us)
[2024-11-15T10:44:16.887Z] Device Information          : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min   max
[2024-11-15T10:44:16.887Z] ===================================================================================================================
[2024-11-15T10:44:16.887Z] Total                       : 0.00  0.00  0.00  0.00  0.00  0.00  0.00
10:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2
10:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
10:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82266'
10:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 82266
10:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 82266
10:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
[2024-11-15 10:44:17.333872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
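The @71 step re-adds the subsystem's TCP listener, which is what lets the bdevperf instance started next connect at all (every attempt above was refused with errno 111). The target-side command, repeated as a standalone sketch with the same arguments as the log line above:
    # Accept host connections for cnode1 on 10.0.0.3:4420 again.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420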
00:20:51.907 10:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82405
10:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
10:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82405 /var/tmp/bdevperf.sock
10:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 82405 ']'
10:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
10:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100
10:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
10:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable
10:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:20:52.214 [2024-11-15 10:44:17.410799] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:20:52.214 [2024-11-15 10:44:17.411317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82405 ]
00:20:52.214 [2024-11-15 10:44:17.564713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:52.214 [2024-11-15 10:44:17.636816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:20:52.214 [2024-11-15 10:44:17.695331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:20:53.151 10:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 ))
10:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0
10:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:20:53.408 10:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:20:53.667 NVMe0n1
00:20:53.667 10:44:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82433
10:44:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
10:44:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:53.926 Running I/O for 10 seconds...
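The @79 attach is where this test case wires in its reconnect policy; roughly, per bdev_nvme's option semantics: retry the connection every reconnect-delay-sec (1 s), fail pending I/O back to the upper layer after fast-io-fail-timeout-sec (2 s), and give up on the controller entirely after ctrlr-loss-timeout-sec (5 s). A standalone sketch of the same attach, with arguments identical to the log line above, run against a live bdevperf RPC socket:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
    # The knobs nest: reconnect-delay-sec <= fast-io-fail-timeout-sec <= ctrlr-loss-timeout-sec.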
00:20:54.861 10:44:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:20:55.123 6805.00 IOPS, 26.58 MiB/s
[2024-11-15T10:44:20.621Z] [2024-11-15 10:44:20.380806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.123 [2024-11-15 10:44:20.381363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.123 [2024-11-15 10:44:20.381822 .. 10:44:20.387509] nvme_qpair.c: [condensed: dozens of repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs — WRITE sqid:1 lba:65824-66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 — every command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:20:55.125 [2024-11-15 10:44:20.387534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:3 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.125 [2024-11-15 10:44:20.387544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.125 [2024-11-15 10:44:20.387555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.125 [2024-11-15 10:44:20.387564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.125 [2024-11-15 10:44:20.387575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.125 [2024-11-15 10:44:20.387585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.125 [2024-11-15 10:44:20.387596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.125 [2024-11-15 10:44:20.387605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.125 [2024-11-15 10:44:20.387616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.125 [2024-11-15 10:44:20.387626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.125 [2024-11-15 10:44:20.387637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.125 [2024-11-15 10:44:20.387646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.125 [2024-11-15 10:44:20.387656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.125 [2024-11-15 10:44:20.387667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.125 [2024-11-15 10:44:20.387678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.125 [2024-11-15 10:44:20.387698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.125 [2024-11-15 10:44:20.387710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.125 [2024-11-15 10:44:20.387719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.125 [2024-11-15 10:44:20.387730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.125 [2024-11-15 10:44:20.387739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.125 [2024-11-15 10:44:20.387751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:20:55.125 [2024-11-15 10:44:20.387760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.125 [2024-11-15 10:44:20.387771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.125 [2024-11-15 10:44:20.387780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.125 [2024-11-15 10:44:20.387792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.126 [2024-11-15 10:44:20.387801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.387812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.126 [2024-11-15 10:44:20.387822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.387833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.126 [2024-11-15 10:44:20.387843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.387854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.126 [2024-11-15 10:44:20.387864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.387875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.126 [2024-11-15 10:44:20.387884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.387895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.126 [2024-11-15 10:44:20.387905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.387916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.126 [2024-11-15 10:44:20.387926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.387937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.126 [2024-11-15 10:44:20.387947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.387958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.126 [2024-11-15 
10:44:20.387967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.387977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.126 [2024-11-15 10:44:20.387988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.387999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.126 [2024-11-15 10:44:20.388008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.126 [2024-11-15 10:44:20.388033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.126 [2024-11-15 10:44:20.388054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.126 [2024-11-15 10:44:20.388074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.126 [2024-11-15 10:44:20.388094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.126 [2024-11-15 10:44:20.388115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.126 [2024-11-15 10:44:20.388134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.126 [2024-11-15 10:44:20.388155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.126 [2024-11-15 10:44:20.388176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.126 [2024-11-15 10:44:20.388197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.126 [2024-11-15 10:44:20.388218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.126 [2024-11-15 10:44:20.388238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.126 [2024-11-15 10:44:20.388259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.126 [2024-11-15 10:44:20.388280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.126 [2024-11-15 10:44:20.388300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.126 [2024-11-15 10:44:20.388320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.126 [2024-11-15 10:44:20.388340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.126 [2024-11-15 10:44:20.388365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.126 [2024-11-15 10:44:20.388385] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.126 [2024-11-15 10:44:20.388405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.126 [2024-11-15 10:44:20.388425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.126 [2024-11-15 10:44:20.388446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.126 [2024-11-15 10:44:20.388467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.126 [2024-11-15 10:44:20.388493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.126 [2024-11-15 10:44:20.388526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c83f70 is same with the state(6) to be set 00:20:55.126 [2024-11-15 10:44:20.388551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:55.126 [2024-11-15 10:44:20.388559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:55.126 [2024-11-15 10:44:20.388568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66712 len:8 PRP1 0x0 PRP2 0x0 00:20:55.126 [2024-11-15 10:44:20.388577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.126 [2024-11-15 10:44:20.388887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:55.126 [2024-11-15 10:44:20.388970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c16e50 (9): Bad file descriptor 00:20:55.126 [2024-11-15 10:44:20.389077] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:55.126 [2024-11-15 10:44:20.389099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c16e50 with addr=10.0.0.3, port=4420 00:20:55.126 [2024-11-15 
00:20:55.126 [2024-11-15 10:44:20.389111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c16e50 is same with the state(6) to be set
00:20:55.126 [2024-11-15 10:44:20.389129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c16e50 (9): Bad file descriptor
00:20:55.127 [2024-11-15 10:44:20.389146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:20:55.127 [2024-11-15 10:44:20.389156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:20:55.127 [2024-11-15 10:44:20.389167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:20:55.127 [2024-11-15 10:44:20.389178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:20:55.127 [2024-11-15 10:44:20.389189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:20:55.127 10:44:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:20:55.951 4106.00 IOPS, 16.04 MiB/s [2024-11-15T10:44:21.449Z]
00:20:55.951 [2024-11-15 10:44:21.389315] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:55.951 [2024-11-15 10:44:21.389710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c16e50 with addr=10.0.0.3, port=4420
00:20:55.951 [2024-11-15 10:44:21.390119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c16e50 is same with the state(6) to be set
00:20:55.951 [2024-11-15 10:44:21.390538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c16e50 (9): Bad file descriptor
00:20:55.951 [2024-11-15 10:44:21.390942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:20:55.951 [2024-11-15 10:44:21.391339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:20:55.951 [2024-11-15 10:44:21.391734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:20:55.951 [2024-11-15 10:44:21.391950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:20:55.951 [2024-11-15 10:44:21.392335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:20:55.951 10:44:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:20:56.209 [2024-11-15 10:44:21.680829] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:20:56.209 10:44:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82433
00:20:57.033 2737.33 IOPS, 10.69 MiB/s [2024-11-15T10:44:22.531Z]
00:20:57.033 [2024-11-15 10:44:22.405297] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
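The abort storm above is the signature of an I/O submission queue being torn down: nvme_qpair.c prints one command/completion *NOTICE* pair per outstanding request, and every completion carries the status pair "(00/08)", which per the NVMe base spec is status code type 0h (Generic Command Status) with status code 08h, "Command Aborted due to SQ Deletion". The trailing dnr:0 means the Do Not Retry bit is clear, which is why the bdev_nvme layer is free to retry the I/O once the controller reconnects. A small illustrative decoder for that pair (not SPDK code; the strings mirror the ones SPDK prints, and only codes seen in this log plus obvious neighbors are mapped):

    # Hedged sketch: decode the "(SCT/SC)" pair from spdk_nvme_print_completion,
    # e.g. "(00/08)" in the notices above. Mapping follows the NVMe base spec's
    # Generic Command Status table; this dict is deliberately partial.
    GENERIC_STATUS = {
        0x00: "SUCCESS",
        0x07: "ABORTED - BY REQUEST",
        0x08: "ABORTED - SQ DELETION",   # what every queued I/O gets here
    }

    def decode(sct_sc: str) -> str:
        sct, sc = (int(x, 16) for x in sct_sc.split("/"))
        if sct == 0x0:  # SCT 0h = Generic Command Status
            return GENERIC_STATUS.get(sc, f"generic sc=0x{sc:02x}")
        return f"sct=0x{sct:x}, sc=0x{sc:02x}"

    print(decode("00/08"))  # -> ABORTED - SQ DELETION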
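The scripts/rpc.py invocations in this transcript are thin JSON-RPC 2.0 clients talking to the target over a Unix-domain socket. A minimal sketch of the nvmf_subsystem_add_listener call above sent by hand, assuming the target's default /var/tmp/spdk.sock RPC socket (the socket path, request id, and adrfam value are illustrative defaults, not taken from this log):

    import json
    import socket

    def spdk_rpc(method, params, sock_path="/var/tmp/spdk.sock"):
        """Send one JSON-RPC 2.0 request to a running SPDK target, return the reply."""
        req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            buf = b""
            while True:                  # replies are not newline-delimited,
                chunk = s.recv(4096)     # so accumulate until the JSON parses
                if not chunk:
                    raise ConnectionError("RPC socket closed before a full reply")
                buf += chunk
                try:
                    return json.loads(buf.decode())
                except ValueError:
                    continue

    # Same effect as the transcript's:
    #   rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    print(spdk_rpc("nvmf_subsystem_add_listener", {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "listen_address": {"trtype": "tcp", "adrfam": "ipv4",
                           "traddr": "10.0.0.3", "trsvcid": "4420"},
    }))

The nvmf_subsystem_remove_listener call later in the log takes the same parameters; removing the listener is what triggers the next abort/reset cycle below.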
00:20:58.934 2053.00 IOPS, 8.02 MiB/s [2024-11-15T10:44:25.365Z]
3089.80 IOPS, 12.07 MiB/s [2024-11-15T10:44:26.297Z]
4133.50 IOPS, 16.15 MiB/s [2024-11-15T10:44:27.233Z]
4891.29 IOPS, 19.11 MiB/s [2024-11-15T10:44:28.230Z]
5475.25 IOPS, 21.39 MiB/s [2024-11-15T10:44:29.604Z]
5919.33 IOPS, 23.12 MiB/s [2024-11-15T10:44:29.604Z]
6253.70 IOPS, 24.43 MiB/s
00:21:04.106 Latency(us)
00:21:04.106 [2024-11-15T10:44:29.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:04.106 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:04.106 Verification LBA range: start 0x0 length 0x4000
00:21:04.106 NVMe0n1 : 10.01 6257.78 24.44 0.00 0.00 20424.37 1660.74 3035150.89
00:21:04.106 [2024-11-15T10:44:29.604Z] ===================================================================================================================
00:21:04.106 [2024-11-15T10:44:29.604Z] Total : 6257.78 24.44 0.00 0.00 20424.37 1660.74 3035150.89
00:21:04.106 {
00:21:04.106   "results": [
00:21:04.106     {
00:21:04.106       "job": "NVMe0n1",
00:21:04.106       "core_mask": "0x4",
00:21:04.106       "workload": "verify",
00:21:04.106       "status": "finished",
00:21:04.106       "verify_range": {
00:21:04.106         "start": 0,
00:21:04.106         "length": 16384
00:21:04.106       },
00:21:04.106       "queue_depth": 128,
00:21:04.106       "io_size": 4096,
00:21:04.106       "runtime": 10.009295,
00:21:04.106       "iops": 6257.78339033868,
00:21:04.106       "mibps": 24.44446636851047,
00:21:04.106       "io_failed": 0,
00:21:04.106       "io_timeout": 0,
00:21:04.106       "avg_latency_us": 20424.366425813794,
00:21:04.106       "min_latency_us": 1660.7418181818182,
00:21:04.106       "max_latency_us": 3035150.8945454545
00:21:04.106     }
00:21:04.106   ],
00:21:04.106   "core_count": 1
00:21:04.106 }
00:21:04.106 10:44:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82533
00:21:04.106 10:44:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:04.106 10:44:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:21:04.106 Running I/O for 10 seconds...
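The human-readable latency table and the JSON block above describe the same bdevperf run, and the derived fields are mutually consistent. A quick arithmetic check, with the numbers copied from the JSON (the 2^20 byte-to-MiB conversion and the I/O count are plain arithmetic, not extra data from the log):

    # Values copied verbatim from the bdevperf "results" JSON above.
    result = {"iops": 6257.78339033868, "io_size": 4096,
              "runtime": 10.009295, "mibps": 24.44446636851047}

    mibps = result["iops"] * result["io_size"] / (1 << 20)   # 4 KiB I/Os -> MiB/s
    total_ios = result["iops"] * result["runtime"]           # I/Os completed in ~10 s
    print(f"{mibps:.2f} MiB/s")    # 24.44, matches the table's MiB/s column
    print(f"{total_ios:.0f} I/Os") # ~62636 verify I/Os for the run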
00:21:05.045 10:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:21:05.045 6692.00 IOPS, 26.14 MiB/s [2024-11-15T10:44:30.543Z]
00:21:05.045-00:21:05.047 [2024-11-15 10:44:30.461091 - 10:44:30.464448] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: repeated *NOTICE* pair for every I/O queued when the listener was removed - WRITE sqid:1 lba:60504-60576 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ sqid:1 lba:59560-60272 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), interleaved, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [per-command cid values elided here]
[2024-11-15 10:44:30.464459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:34 nsid:1 lba:60280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.047 [2024-11-15 10:44:30.464468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.047 [2024-11-15 10:44:30.464479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:60288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.047 [2024-11-15 10:44:30.464489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.047 [2024-11-15 10:44:30.464500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.047 [2024-11-15 10:44:30.464524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.047 [2024-11-15 10:44:30.464540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:60304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.047 [2024-11-15 10:44:30.464550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.047 [2024-11-15 10:44:30.464561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.047 [2024-11-15 10:44:30.464571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.047 [2024-11-15 10:44:30.464583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:60320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.047 [2024-11-15 10:44:30.464592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.047 [2024-11-15 10:44:30.464614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:60328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.047 [2024-11-15 10:44:30.464624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.047 [2024-11-15 10:44:30.464635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.047 [2024-11-15 10:44:30.464645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.047 [2024-11-15 10:44:30.464656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.047 [2024-11-15 10:44:30.464666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.047 [2024-11-15 10:44:30.464677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.047 [2024-11-15 10:44:30.464686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.047 [2024-11-15 10:44:30.464697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:60360 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.047 [2024-11-15 10:44:30.464706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.047 [2024-11-15 10:44:30.464717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.047 [2024-11-15 10:44:30.464727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.047 [2024-11-15 10:44:30.464738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.047 [2024-11-15 10:44:30.464747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.047 [2024-11-15 10:44:30.464759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.047 [2024-11-15 10:44:30.464769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.048 [2024-11-15 10:44:30.464780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.048 [2024-11-15 10:44:30.465028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.048 [2024-11-15 10:44:30.465048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.048 [2024-11-15 10:44:30.465113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.048 [2024-11-15 10:44:30.465129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.048 [2024-11-15 10:44:30.465140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.048 [2024-11-15 10:44:30.465151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:60416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.048 [2024-11-15 10:44:30.465160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.048 [2024-11-15 10:44:30.465180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:60424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.048 [2024-11-15 10:44:30.465189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.048 [2024-11-15 10:44:30.465200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.048 [2024-11-15 10:44:30.465210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.048 [2024-11-15 10:44:30.465221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:60440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:05.048 [2024-11-15 10:44:30.465230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.048 [2024-11-15 10:44:30.465241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:60448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.048 [2024-11-15 10:44:30.465251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.048 [2024-11-15 10:44:30.465262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:60456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.048 [2024-11-15 10:44:30.465271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.048 [2024-11-15 10:44:30.465282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.048 [2024-11-15 10:44:30.465291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.048 [2024-11-15 10:44:30.465303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:60472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.048 [2024-11-15 10:44:30.465313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.048 [2024-11-15 10:44:30.465324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.048 [2024-11-15 10:44:30.465333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.048 [2024-11-15 10:44:30.465344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.048 [2024-11-15 10:44:30.465353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.048 [2024-11-15 10:44:30.465365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c85150 is same with the state(6) to be set 00:21:05.048 [2024-11-15 10:44:30.465389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:05.048 [2024-11-15 10:44:30.465398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:05.048 [2024-11-15 10:44:30.465407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60496 len:8 PRP1 0x0 PRP2 0x0 00:21:05.048 [2024-11-15 10:44:30.465416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.048 [2024-11-15 10:44:30.465935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.048 [2024-11-15 10:44:30.466312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.048 [2024-11-15 10:44:30.466831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:05.048 [2024-11-15 10:44:30.466852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.048 [2024-11-15 10:44:30.466863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.048 [2024-11-15 10:44:30.466872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.048 [2024-11-15 10:44:30.466882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.048 [2024-11-15 10:44:30.466891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.048 [2024-11-15 10:44:30.466901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c16e50 is same with the state(6) to be set 00:21:05.048 [2024-11-15 10:44:30.467123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:05.048 [2024-11-15 10:44:30.467147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c16e50 (9): Bad file descriptor 00:21:05.048 [2024-11-15 10:44:30.467250] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:05.048 [2024-11-15 10:44:30.467274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c16e50 with addr=10.0.0.3, port=4420 00:21:05.048 [2024-11-15 10:44:30.467285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c16e50 is same with the state(6) to be set 00:21:05.048 [2024-11-15 10:44:30.467303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c16e50 (9): Bad file descriptor 00:21:05.048 [2024-11-15 10:44:30.467320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:21:05.048 [2024-11-15 10:44:30.467330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:21:05.048 [2024-11-15 10:44:30.467341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:05.048 [2024-11-15 10:44:30.467351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
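errno = 111 on these connect() calls is ECONNREFUSED: nothing is listening on 10.0.0.3:4420 while the listener is removed, so every reconnect attempt is refused and the controller stays in the failed state until the next retry. A hedged shell-level illustration of the same condition (hypothetical probe, not part of the test scripts):

  # Returns non-zero while the NVMe/TCP listener is down, mirroring the
  # uring_sock_create connect() failures logged above.
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.3/4420' 2>/dev/null; then
    echo "listener is accepting connections"
  else
    echo "connection refused (errno 111) or timed out"
  fi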
00:21:05.048 [2024-11-15 10:44:30.467362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:05.048 10:44:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:21:05.983 3722.50 IOPS, 14.54 MiB/s [2024-11-15T10:44:31.481Z] [2024-11-15 10:44:31.467532] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:05.983 [2024-11-15 10:44:31.467614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c16e50 with addr=10.0.0.3, port=4420 00:21:05.983 [2024-11-15 10:44:31.467632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c16e50 is same with the state(6) to be set 00:21:05.983 [2024-11-15 10:44:31.467661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c16e50 (9): Bad file descriptor 00:21:05.983 [2024-11-15 10:44:31.467682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:21:05.983 [2024-11-15 10:44:31.467694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:21:05.983 [2024-11-15 10:44:31.467706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:05.983 [2024-11-15 10:44:31.467718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:21:05.983 [2024-11-15 10:44:31.467730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:07.177 2481.67 IOPS, 9.69 MiB/s [2024-11-15T10:44:32.675Z] [2024-11-15 10:44:32.467863] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.177 [2024-11-15 10:44:32.467936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c16e50 with addr=10.0.0.3, port=4420 00:21:07.177 [2024-11-15 10:44:32.467955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c16e50 is same with the state(6) to be set 00:21:07.177 [2024-11-15 10:44:32.467981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c16e50 (9): Bad file descriptor 00:21:07.177 [2024-11-15 10:44:32.468002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:21:07.177 [2024-11-15 10:44:32.468013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:21:07.177 [2024-11-15 10:44:32.468024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:07.177 [2024-11-15 10:44:32.468036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:21:07.177 [2024-11-15 10:44:32.468048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:08.159 1861.25 IOPS, 7.27 MiB/s [2024-11-15T10:44:33.657Z] [2024-11-15 10:44:33.471891] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.159 [2024-11-15 10:44:33.472306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c16e50 with addr=10.0.0.3, port=4420 00:21:08.159 [2024-11-15 10:44:33.472333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c16e50 is same with the state(6) to be set 00:21:08.159 [2024-11-15 10:44:33.472619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c16e50 (9): Bad file descriptor 00:21:08.159 [2024-11-15 10:44:33.472869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:21:08.159 [2024-11-15 10:44:33.472884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:21:08.159 [2024-11-15 10:44:33.472896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:08.159 [2024-11-15 10:44:33.472908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:21:08.159 [2024-11-15 10:44:33.472920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:08.159 10:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:08.417 [2024-11-15 10:44:33.753852] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:08.417 10:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82533 00:21:09.242 1489.00 IOPS, 5.82 MiB/s [2024-11-15T10:44:34.740Z] [2024-11-15 10:44:34.503244] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
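Steps @102/@103 above re-add the listener and wait on the bdevperf job; the next scheduled reconnect then succeeds ("Resetting controller successful"). A minimal sketch of the fault-injection cycle this test drives, assuming a running target and the rpc.py path used throughout this job:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SUB=nqn.2016-06.io.spdk:cnode1
  # Tear the TCP listener down: in-flight reads complete as ABORTED - SQ DELETION
  # and the host initiator falls into its reconnect loop (ECONNREFUSED).
  "$RPC" nvmf_subsystem_remove_listener "$SUB" -t tcp -a 10.0.0.3 -s 4420
  sleep 3   # host/timeout.sh@101: let several reconnect attempts fail
  # Restore the listener: the next reconnect attempt succeeds and I/O resumes.
  "$RPC" nvmf_subsystem_add_listener "$SUB" -t tcp -a 10.0.0.3 -s 4420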
00:21:11.112 2556.67 IOPS, 9.99 MiB/s [2024-11-15T10:44:37.546Z] 3541.14 IOPS, 13.83 MiB/s [2024-11-15T10:44:38.481Z] 4266.50 IOPS, 16.67 MiB/s [2024-11-15T10:44:39.415Z] 4844.89 IOPS, 18.93 MiB/s [2024-11-15T10:44:39.415Z] 5317.20 IOPS, 20.77 MiB/s 00:21:13.917 Latency(us) 00:21:13.917 [2024-11-15T10:44:39.415Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.917 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:13.917 Verification LBA range: start 0x0 length 0x4000 00:21:13.917 NVMe0n1 : 10.01 5323.38 20.79 3617.99 0.00 14280.36 714.94 3019898.88 00:21:13.917 [2024-11-15T10:44:39.415Z] =================================================================================================================== 00:21:13.917 [2024-11-15T10:44:39.415Z] Total : 5323.38 20.79 3617.99 0.00 14280.36 0.00 3019898.88 00:21:13.917 { 00:21:13.917 "results": [ 00:21:13.917 { 00:21:13.917 "job": "NVMe0n1", 00:21:13.917 "core_mask": "0x4", 00:21:13.917 "workload": "verify", 00:21:13.917 "status": "finished", 00:21:13.917 "verify_range": { 00:21:13.917 "start": 0, 00:21:13.917 "length": 16384 00:21:13.917 }, 00:21:13.917 "queue_depth": 128, 00:21:13.917 "io_size": 4096, 00:21:13.917 "runtime": 10.009436, 00:21:13.917 "iops": 5323.3768615934005, 00:21:13.917 "mibps": 20.79444086559922, 00:21:13.917 "io_failed": 36214, 00:21:13.917 "io_timeout": 0, 00:21:13.917 "avg_latency_us": 14280.362865681102, 00:21:13.918 "min_latency_us": 714.9381818181819, 00:21:13.918 "max_latency_us": 3019898.88 00:21:13.918 } 00:21:13.918 ], 00:21:13.918 "core_count": 1 00:21:13.918 } 00:21:13.918 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82405 00:21:13.918 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 82405 ']' 00:21:13.918 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 82405 00:21:13.918 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:21:13.918 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:13.918 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82405 00:21:14.176 killing process with pid 82405 00:21:14.176 Received shutdown signal, test time was about 10.000000 seconds 00:21:14.176 00:21:14.176 Latency(us) 00:21:14.176 [2024-11-15T10:44:39.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.176 [2024-11-15T10:44:39.674Z] =================================================================================================================== 00:21:14.176 [2024-11-15T10:44:39.674Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:14.176 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:21:14.176 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:21:14.176 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82405' 00:21:14.176 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 82405 00:21:14.176 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 82405 00:21:14.176 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82648 00:21:14.176 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:21:14.176 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82648 /var/tmp/bdevperf.sock 00:21:14.176 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 82648 ']' 00:21:14.176 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:14.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:14.176 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:14.176 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:14.176 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:14.176 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:14.435 [2024-11-15 10:44:39.674850] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:21:14.435 [2024-11-15 10:44:39.674952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82648 ] 00:21:14.435 [2024-11-15 10:44:39.820671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.435 [2024-11-15 10:44:39.877473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.693 [2024-11-15 10:44:39.933446] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:14.693 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:14.693 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:21:14.693 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82656 00:21:14.693 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82648 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:21:14.693 10:44:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:21:14.952 10:44:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:15.211 NVMe0n1 00:21:15.211 10:44:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82698 00:21:15.211 10:44:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:15.211 10:44:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:21:15.471 Running I/O for 10 seconds... 
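Before the second run's output starts, note that the result block from the first run (the JSON above) is internally consistent: MiB/s is iops x io_size / 2^20 and Fail/s is io_failed / runtime. A quick arithmetic check, using only values copied from that JSON (awk used here purely for the arithmetic):

  awk 'BEGIN {
    iops = 5323.3768615934005; io_size = 4096   # bytes per I/O, from "io_size": 4096
    io_failed = 36214; runtime = 10.009436      # from "io_failed" and "runtime"
    printf "MiB/s  = %.2f\n", iops * io_size / (1024 * 1024)   # -> 20.79, as reported
    printf "Fail/s = %.2f\n", io_failed / runtime              # -> 3617.99, as reported
  }'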
00:21:16.408 10:44:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:16.408 14986.00 IOPS, 58.54 MiB/s [2024-11-15T10:44:41.906Z] [2024-11-15 10:44:41.891150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.408 [2024-11-15 10:44:41.891204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... remaining READ / ABORTED - SQ DELETION (00/08) notice pairs elided: qid:1, cid 125 down through cid 37, one pair per outstanding command with its own lba, identical status on every completion; the dump is truncated here ...]
10:44:41.894374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:115272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:74736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:91568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:47680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894607] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:116416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:129912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894835] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:40416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:88064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:52616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.894987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.894997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.895009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.895019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.895031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.895040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.895051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76280 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.895061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.895072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.895082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.895093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.895103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.895114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.895124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.895135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.895145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.895156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.895166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.895179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.410 [2024-11-15 10:44:41.895189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.895199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd57e30 is same with the state(6) to be set 00:21:16.410 [2024-11-15 10:44:41.895213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.410 [2024-11-15 10:44:41.895221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.410 [2024-11-15 10:44:41.895229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33784 len:8 PRP1 0x0 PRP2 0x0 00:21:16.410 [2024-11-15 10:44:41.895243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.410 [2024-11-15 10:44:41.896323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:16.410 [2024-11-15 10:44:41.896845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xceae50 (9): Bad file descriptor 00:21:16.410 [2024-11-15 10:44:41.897366] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.410 [2024-11-15 10:44:41.897573] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xceae50 with addr=10.0.0.3, port=4420 00:21:16.410 [2024-11-15 10:44:41.898103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceae50 is same with the state(6) to be set 00:21:16.410 [2024-11-15 10:44:41.898560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xceae50 (9): Bad file descriptor 00:21:16.410 [2024-11-15 10:44:41.899005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:16.410 [2024-11-15 10:44:41.899452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:16.410 [2024-11-15 10:44:41.899839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:16.410 [2024-11-15 10:44:41.900110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:21:16.410 [2024-11-15 10:44:41.900463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:16.669 10:44:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82698 00:21:18.267 8826.50 IOPS, 34.48 MiB/s [2024-11-15T10:44:44.024Z] 5884.33 IOPS, 22.99 MiB/s [2024-11-15T10:44:44.024Z] [2024-11-15 10:44:43.901170] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:18.526 [2024-11-15 10:44:43.901237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xceae50 with addr=10.0.0.3, port=4420 00:21:18.526 [2024-11-15 10:44:43.901254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceae50 is same with the state(6) to be set 00:21:18.526 [2024-11-15 10:44:43.901280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xceae50 (9): Bad file descriptor 00:21:18.526 [2024-11-15 10:44:43.901299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:18.526 [2024-11-15 10:44:43.901309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:18.526 [2024-11-15 10:44:43.901320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:18.526 [2024-11-15 10:44:43.901331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:21:18.526 [2024-11-15 10:44:43.901342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:20.396 4413.25 IOPS, 17.24 MiB/s [2024-11-15T10:44:46.153Z] 3530.60 IOPS, 13.79 MiB/s [2024-11-15T10:44:46.153Z] [2024-11-15 10:44:45.901491] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.655 [2024-11-15 10:44:45.901561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xceae50 with addr=10.0.0.3, port=4420 00:21:20.655 [2024-11-15 10:44:45.901579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceae50 is same with the state(6) to be set 00:21:20.655 [2024-11-15 10:44:45.901604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xceae50 (9): Bad file descriptor 00:21:20.655 [2024-11-15 10:44:45.901624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:20.655 [2024-11-15 10:44:45.901635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:20.655 [2024-11-15 10:44:45.901647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:20.655 [2024-11-15 10:44:45.901658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:21:20.655 [2024-11-15 10:44:45.901670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:22.599 2942.17 IOPS, 11.49 MiB/s [2024-11-15T10:44:48.097Z] 2521.86 IOPS, 9.85 MiB/s [2024-11-15T10:44:48.097Z] [2024-11-15 10:44:47.901771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:22.599 [2024-11-15 10:44:47.901843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:22.599 [2024-11-15 10:44:47.901859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:22.599 [2024-11-15 10:44:47.901870] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:21:22.599 [2024-11-15 10:44:47.901883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:21:23.538 2206.62 IOPS, 8.62 MiB/s 00:21:23.538 Latency(us) 00:21:23.538 [2024-11-15T10:44:49.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.538 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:21:23.538 NVMe0n1 : 8.18 2156.83 8.43 15.64 0.00 58855.51 7864.32 7015926.69 00:21:23.538 [2024-11-15T10:44:49.036Z] =================================================================================================================== 00:21:23.538 [2024-11-15T10:44:49.036Z] Total : 2156.83 8.43 15.64 0.00 58855.51 7864.32 7015926.69 00:21:23.538 { 00:21:23.538 "results": [ 00:21:23.538 { 00:21:23.538 "job": "NVMe0n1", 00:21:23.538 "core_mask": "0x4", 00:21:23.538 "workload": "randread", 00:21:23.538 "status": "finished", 00:21:23.538 "queue_depth": 128, 00:21:23.538 "io_size": 4096, 00:21:23.538 "runtime": 8.184713, 00:21:23.538 "iops": 2156.825779963207, 00:21:23.538 "mibps": 8.425100702981277, 00:21:23.538 "io_failed": 128, 00:21:23.538 "io_timeout": 0, 00:21:23.538 "avg_latency_us": 58855.50527457808, 00:21:23.538 "min_latency_us": 7864.32, 00:21:23.538 "max_latency_us": 7015926.69090909 00:21:23.538 } 00:21:23.538 ], 00:21:23.538 "core_count": 1 00:21:23.538 } 00:21:23.538 10:44:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:23.538 Attaching 5 probes... 00:21:23.538 1392.103023: reset bdev controller NVMe0 00:21:23.538 1393.074017: reconnect bdev controller NVMe0 00:21:23.538 3396.821747: reconnect delay bdev controller NVMe0 00:21:23.538 3396.843027: reconnect bdev controller NVMe0 00:21:23.538 5397.164099: reconnect delay bdev controller NVMe0 00:21:23.538 5397.185050: reconnect bdev controller NVMe0 00:21:23.538 7397.519421: reconnect delay bdev controller NVMe0 00:21:23.538 7397.548518: reconnect bdev controller NVMe0 00:21:23.538 10:44:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:21:23.538 10:44:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:21:23.538 10:44:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82656 00:21:23.538 10:44:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:23.538 10:44:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82648 00:21:23.538 10:44:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 82648 ']' 00:21:23.538 10:44:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 82648 00:21:23.538 10:44:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:21:23.538 10:44:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:23.538 10:44:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82648 00:21:23.539 killing process with pid 82648 00:21:23.539 Received shutdown signal, test time was about 8.246890 seconds 00:21:23.539 00:21:23.539 Latency(us) 00:21:23.539 [2024-11-15T10:44:49.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.539 [2024-11-15T10:44:49.037Z] =================================================================================================================== 00:21:23.539 [2024-11-15T10:44:49.037Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:23.539 10:44:48 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:21:23.539 10:44:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:21:23.539 10:44:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82648' 00:21:23.539 10:44:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 82648 00:21:23.539 10:44:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 82648 00:21:23.797 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:24.054 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:21:24.055 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:21:24.055 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:24.055 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:21:24.055 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:24.055 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:21:24.055 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:24.055 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:24.055 rmmod nvme_tcp 00:21:24.055 rmmod nvme_fabrics 00:21:24.055 rmmod nvme_keyring 00:21:24.312 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:24.312 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:21:24.312 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:21:24.312 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 82223 ']' 00:21:24.312 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 82223 00:21:24.312 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 82223 ']' 00:21:24.312 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 82223 00:21:24.312 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:21:24.312 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:24.312 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82223 00:21:24.312 killing process with pid 82223 00:21:24.312 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:24.312 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:24.312 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82223' 00:21:24.312 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 82223 00:21:24.312 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 82223 00:21:24.570 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:24.570 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:24.570 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:24.570 10:44:49 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:21:24.570 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:21:24.570 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:21:24.570 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:24.570 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:24.570 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:24.570 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:24.570 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:24.570 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:24.570 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:24.570 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:24.570 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:24.570 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:24.571 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:24.571 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:24.571 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:24.571 10:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:24.571 10:44:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:24.571 10:44:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:24.571 10:44:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:24.571 10:44:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.571 10:44:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:24.571 10:44:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.830 10:44:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:21:24.830 00:21:24.830 real 0m46.321s 00:21:24.830 user 2m15.352s 00:21:24.830 sys 0m5.850s 00:21:24.830 ************************************ 00:21:24.830 END TEST nvmf_timeout 00:21:24.830 ************************************ 00:21:24.830 10:44:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:24.830 10:44:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:24.830 10:44:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:21:24.830 10:44:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:24.830 ************************************ 00:21:24.830 END TEST nvmf_host 00:21:24.830 ************************************ 00:21:24.830 00:21:24.830 real 5m10.736s 00:21:24.830 user 13m34.899s 00:21:24.830 sys 1m9.053s 00:21:24.830 10:44:50 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:21:24.830 10:44:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.830 10:44:50 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:21:24.830 10:44:50 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:21:24.830 ************************************ 00:21:24.830 END TEST nvmf_tcp 00:21:24.830 ************************************ 00:21:24.830 00:21:24.830 real 13m2.606s 00:21:24.830 user 31m31.773s 00:21:24.830 sys 3m10.944s 00:21:24.830 10:44:50 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:24.830 10:44:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:24.830 10:44:50 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:21:24.830 10:44:50 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:21:24.830 10:44:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:24.830 10:44:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:24.830 10:44:50 -- common/autotest_common.sh@10 -- # set +x 00:21:24.830 ************************************ 00:21:24.830 START TEST nvmf_dif 00:21:24.830 ************************************ 00:21:24.830 10:44:50 nvmf_dif -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:21:24.830 * Looking for test storage... 00:21:24.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:24.830 10:44:50 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:24.830 10:44:50 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:21:24.830 10:44:50 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:25.089 10:44:50 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:21:25.089 10:44:50 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:25.089 10:44:50 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:25.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.089 --rc genhtml_branch_coverage=1 00:21:25.089 --rc genhtml_function_coverage=1 00:21:25.089 --rc genhtml_legend=1 00:21:25.089 --rc geninfo_all_blocks=1 00:21:25.089 --rc geninfo_unexecuted_blocks=1 00:21:25.089 00:21:25.089 ' 00:21:25.089 10:44:50 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:25.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.089 --rc genhtml_branch_coverage=1 00:21:25.089 --rc genhtml_function_coverage=1 00:21:25.089 --rc genhtml_legend=1 00:21:25.089 --rc geninfo_all_blocks=1 00:21:25.089 --rc geninfo_unexecuted_blocks=1 00:21:25.089 00:21:25.089 ' 00:21:25.089 10:44:50 nvmf_dif -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:25.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.089 --rc genhtml_branch_coverage=1 00:21:25.089 --rc genhtml_function_coverage=1 00:21:25.089 --rc genhtml_legend=1 00:21:25.089 --rc geninfo_all_blocks=1 00:21:25.089 --rc geninfo_unexecuted_blocks=1 00:21:25.089 00:21:25.089 ' 00:21:25.089 10:44:50 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:25.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.089 --rc genhtml_branch_coverage=1 00:21:25.089 --rc genhtml_function_coverage=1 00:21:25.089 --rc genhtml_legend=1 00:21:25.089 --rc geninfo_all_blocks=1 00:21:25.089 --rc geninfo_unexecuted_blocks=1 00:21:25.089 00:21:25.089 ' 00:21:25.089 10:44:50 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.089 10:44:50 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.089 10:44:50 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.089 10:44:50 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.089 10:44:50 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.089 10:44:50 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.089 10:44:50 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:21:25.089 10:44:50 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.089 10:44:50 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:25.089 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:25.089 10:44:50 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:21:25.089 10:44:50 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:21:25.089 10:44:50 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:21:25.089 10:44:50 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:21:25.089 10:44:50 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.089 10:44:50 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:25.089 10:44:50 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:25.089 10:44:50 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:25.090 Cannot find device 
"nvmf_init_br" 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@162 -- # true 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:25.090 Cannot find device "nvmf_init_br2" 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@163 -- # true 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:25.090 Cannot find device "nvmf_tgt_br" 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@164 -- # true 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:25.090 Cannot find device "nvmf_tgt_br2" 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@165 -- # true 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:25.090 Cannot find device "nvmf_init_br" 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@166 -- # true 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:25.090 Cannot find device "nvmf_init_br2" 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@167 -- # true 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:25.090 Cannot find device "nvmf_tgt_br" 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@168 -- # true 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:25.090 Cannot find device "nvmf_tgt_br2" 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@169 -- # true 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:25.090 Cannot find device "nvmf_br" 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@170 -- # true 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:25.090 Cannot find device "nvmf_init_if" 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@171 -- # true 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:25.090 Cannot find device "nvmf_init_if2" 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@172 -- # true 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:25.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@173 -- # true 00:21:25.090 10:44:50 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:25.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@174 -- # true 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:25.349 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:25.349 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:21:25.349 00:21:25.349 --- 10.0.0.3 ping statistics --- 00:21:25.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.349 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:25.349 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:21:25.349 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:21:25.349 00:21:25.349 --- 10.0.0.4 ping statistics --- 00:21:25.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.349 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:25.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:25.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:21:25.349 00:21:25.349 --- 10.0.0.1 ping statistics --- 00:21:25.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.349 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:25.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:25.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:21:25.349 00:21:25.349 --- 10.0.0.2 ping statistics --- 00:21:25.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.349 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:21:25.349 10:44:50 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:25.915 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:25.915 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:25.915 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:25.915 10:44:51 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:25.915 10:44:51 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:25.915 10:44:51 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:25.915 10:44:51 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:25.915 10:44:51 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:25.915 10:44:51 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:25.915 10:44:51 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:21:25.915 10:44:51 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:21:25.915 10:44:51 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:25.915 10:44:51 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:25.915 10:44:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:25.915 10:44:51 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=83201 00:21:25.915 10:44:51 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 83201 00:21:25.915 10:44:51 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:25.915 10:44:51 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 83201 ']' 00:21:25.915 10:44:51 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.915 10:44:51 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:25.915 10:44:51 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:25.915 10:44:51 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:25.915 10:44:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:25.915 [2024-11-15 10:44:51.280990] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:21:25.915 [2024-11-15 10:44:51.281092] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.174 [2024-11-15 10:44:51.439728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.174 [2024-11-15 10:44:51.508611] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.174 [2024-11-15 10:44:51.508688] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.174 [2024-11-15 10:44:51.508714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.174 [2024-11-15 10:44:51.508724] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.174 [2024-11-15 10:44:51.508733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:26.174 [2024-11-15 10:44:51.509211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.174 [2024-11-15 10:44:51.568364] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:27.157 10:44:52 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:27.157 10:44:52 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:21:27.157 10:44:52 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:27.157 10:44:52 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:27.157 10:44:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:27.157 10:44:52 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.157 10:44:52 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:21:27.157 10:44:52 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:21:27.157 10:44:52 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.157 10:44:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:27.157 [2024-11-15 10:44:52.333303] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.157 10:44:52 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.157 10:44:52 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:21:27.157 10:44:52 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:27.157 10:44:52 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:27.157 10:44:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:27.157 ************************************ 00:21:27.157 START TEST fio_dif_1_default 00:21:27.157 ************************************ 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:21:27.157 10:44:52 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:27.157 bdev_null0 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:27.157 [2024-11-15 10:44:52.377404] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:27.157 10:44:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:27.157 { 00:21:27.157 "params": { 00:21:27.157 "name": "Nvme$subsystem", 00:21:27.157 "trtype": "$TEST_TRANSPORT", 00:21:27.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.158 "adrfam": "ipv4", 00:21:27.158 "trsvcid": "$NVMF_PORT", 00:21:27.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.158 "hdgst": ${hdgst:-false}, 00:21:27.158 "ddgst": ${ddgst:-false} 00:21:27.158 }, 00:21:27.158 "method": "bdev_nvme_attach_controller" 00:21:27.158 } 00:21:27.158 EOF 00:21:27.158 )") 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@82 -- # gen_fio_conf 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
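Two anonymous file descriptors feed the fio run being assembled here: gen_nvmf_target_json expands the heredoc template above once per requested subsystem (only subsystem 0 in this test) and pretty-prints the merged document with jq onto /dev/fd/62, while gen_fio_conf writes the fio job description to /dev/fd/61; fio_bdev then runs /usr/src/fio/fio with SPDK's spdk_bdev ioengine pointed at both descriptors. The job file itself is not captured by the trace, so the standalone re-creation below is only a sketch: the job parameters are reconstructed from what fio echoes at startup (rw=randread, bs=4096, iodepth=4, a single job named filename0), and filename=Nvme0n1 assumes SPDK's usual <controller>n<nsid> bdev naming for the attached controller Nvme0.

    # hypothetical job file standing in for what gen_fio_conf writes to /dev/fd/61
    cat > /tmp/dif.fio <<'EOF'
    [filename0]
    filename=Nvme0n1
    rw=randread
    bs=4k
    iodepth=4
    thread=1
    EOF
    # the JSON with the bdev_nvme_attach_controller parameters (printed via jq in
    # the next entries) would be saved as /tmp/nvme0.json
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme0.json /tmp/dif.fio

thread=1 reflects the usual requirement for SPDK's fio plugins, which run fio jobs as threads rather than forked processes.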
00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:27.158 "params": { 00:21:27.158 "name": "Nvme0", 00:21:27.158 "trtype": "tcp", 00:21:27.158 "traddr": "10.0.0.3", 00:21:27.158 "adrfam": "ipv4", 00:21:27.158 "trsvcid": "4420", 00:21:27.158 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:27.158 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:27.158 "hdgst": false, 00:21:27.158 "ddgst": false 00:21:27.158 }, 00:21:27.158 "method": "bdev_nvme_attach_controller" 00:21:27.158 }' 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:27.158 10:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:27.158 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:27.158 fio-3.35 00:21:27.158 Starting 1 thread 00:21:39.442 00:21:39.442 filename0: (groupid=0, jobs=1): err= 0: pid=83268: Fri Nov 15 10:45:03 2024 00:21:39.442 read: IOPS=8568, BW=33.5MiB/s (35.1MB/s)(335MiB/10001msec) 00:21:39.442 slat (nsec): min=6461, max=71211, avg=8912.77, stdev=3378.12 00:21:39.442 clat (usec): min=350, max=5091, avg=440.48, stdev=44.42 00:21:39.442 lat (usec): min=357, max=5130, avg=449.39, stdev=45.03 00:21:39.442 clat percentiles (usec): 00:21:39.442 | 1.00th=[ 379], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 420], 00:21:39.442 | 30.00th=[ 424], 40.00th=[ 433], 50.00th=[ 437], 60.00th=[ 445], 00:21:39.442 | 70.00th=[ 453], 80.00th=[ 461], 90.00th=[ 474], 95.00th=[ 486], 00:21:39.442 | 99.00th=[ 519], 99.50th=[ 529], 99.90th=[ 578], 99.95th=[ 611], 00:21:39.442 | 99.99th=[ 1057] 00:21:39.442 bw ( KiB/s): min=33344, max=34848, per=100.00%, avg=34342.74, stdev=414.71, samples=19 00:21:39.442 iops : min= 8336, max= 8712, avg=8585.68, stdev=103.68, samples=19 00:21:39.442 lat (usec) : 500=97.55%, 750=2.42%, 1000=0.02% 00:21:39.442 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:21:39.442 cpu : usr=83.22%, sys=14.77%, ctx=25, majf=0, minf=9 00:21:39.442 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:39.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:39.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:39.442 issued rwts: total=85696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:39.442 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:39.442 00:21:39.442 Run status group 
0 (all jobs): 00:21:39.442 READ: bw=33.5MiB/s (35.1MB/s), 33.5MiB/s-33.5MiB/s (35.1MB/s-35.1MB/s), io=335MiB (351MB), run=10001-10001msec 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.442 00:21:39.442 real 0m11.047s 00:21:39.442 user 0m9.001s 00:21:39.442 sys 0m1.746s 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:39.442 ************************************ 00:21:39.442 END TEST fio_dif_1_default 00:21:39.442 ************************************ 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:39.442 10:45:03 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:21:39.442 10:45:03 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:39.442 10:45:03 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:39.442 10:45:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:39.442 ************************************ 00:21:39.442 START TEST fio_dif_1_multi_subsystems 00:21:39.442 ************************************ 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:39.442 bdev_null0 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:39.442 [2024-11-15 10:45:03.478446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:39.442 bdev_null1 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:21:39.442 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:39.442 { 00:21:39.442 "params": { 00:21:39.442 "name": "Nvme$subsystem", 00:21:39.442 "trtype": "$TEST_TRANSPORT", 00:21:39.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.442 "adrfam": "ipv4", 00:21:39.442 "trsvcid": "$NVMF_PORT", 00:21:39.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.442 "hdgst": ${hdgst:-false}, 00:21:39.442 "ddgst": ${ddgst:-false} 00:21:39.442 }, 00:21:39.442 "method": "bdev_nvme_attach_controller" 00:21:39.442 } 00:21:39.442 EOF 00:21:39.443 )") 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
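The ldd/grep/awk exchanges woven through these entries are the fio_plugin helper's sanitizer probe: if the spdk_bdev plugin was linked against AddressSanitizer (libasan for gcc builds, libclang_rt.asan for clang builds), that runtime must be listed in LD_PRELOAD ahead of the plugin so it initializes first. In this build both probes come back empty, so asan_lib stays unset and LD_PRELOAD ends up carrying only the plugin. Condensed into standalone form, the probe amounts to:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        # ldd prints "libX.so => /path/to/libX.so (0x...)"; field 3 is the resolved path
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev ...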
00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:39.443 { 00:21:39.443 "params": { 00:21:39.443 "name": "Nvme$subsystem", 00:21:39.443 "trtype": "$TEST_TRANSPORT", 00:21:39.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.443 "adrfam": "ipv4", 00:21:39.443 "trsvcid": "$NVMF_PORT", 00:21:39.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.443 "hdgst": ${hdgst:-false}, 00:21:39.443 "ddgst": ${ddgst:-false} 00:21:39.443 }, 00:21:39.443 "method": "bdev_nvme_attach_controller" 00:21:39.443 } 00:21:39.443 EOF 00:21:39.443 )") 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:39.443 "params": { 00:21:39.443 "name": "Nvme0", 00:21:39.443 "trtype": "tcp", 00:21:39.443 "traddr": "10.0.0.3", 00:21:39.443 "adrfam": "ipv4", 00:21:39.443 "trsvcid": "4420", 00:21:39.443 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:39.443 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:39.443 "hdgst": false, 00:21:39.443 "ddgst": false 00:21:39.443 }, 00:21:39.443 "method": "bdev_nvme_attach_controller" 00:21:39.443 },{ 00:21:39.443 "params": { 00:21:39.443 "name": "Nvme1", 00:21:39.443 "trtype": "tcp", 00:21:39.443 "traddr": "10.0.0.3", 00:21:39.443 "adrfam": "ipv4", 00:21:39.443 "trsvcid": "4420", 00:21:39.443 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.443 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:39.443 "hdgst": false, 00:21:39.443 "ddgst": false 00:21:39.443 }, 00:21:39.443 "method": "bdev_nvme_attach_controller" 00:21:39.443 }' 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n 
'' ]] 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:39.443 10:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:39.443 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:39.443 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:39.443 fio-3.35 00:21:39.443 Starting 2 threads 00:21:49.436 00:21:49.436 filename0: (groupid=0, jobs=1): err= 0: pid=83428: Fri Nov 15 10:45:14 2024 00:21:49.436 read: IOPS=4747, BW=18.5MiB/s (19.4MB/s)(185MiB/10001msec) 00:21:49.436 slat (nsec): min=6904, max=63319, avg=13312.37, stdev=3968.81 00:21:49.436 clat (usec): min=416, max=1397, avg=806.11, stdev=48.43 00:21:49.436 lat (usec): min=424, max=1421, avg=819.42, stdev=49.61 00:21:49.436 clat percentiles (usec): 00:21:49.436 | 1.00th=[ 701], 5.00th=[ 725], 10.00th=[ 742], 20.00th=[ 766], 00:21:49.436 | 30.00th=[ 783], 40.00th=[ 799], 50.00th=[ 807], 60.00th=[ 816], 00:21:49.436 | 70.00th=[ 832], 80.00th=[ 840], 90.00th=[ 865], 95.00th=[ 881], 00:21:49.436 | 99.00th=[ 922], 99.50th=[ 938], 99.90th=[ 1004], 99.95th=[ 1303], 00:21:49.436 | 99.99th=[ 1369] 00:21:49.436 bw ( KiB/s): min=18400, max=19360, per=50.00%, avg=18991.16, stdev=247.50, samples=19 00:21:49.436 iops : min= 4600, max= 4840, avg=4747.79, stdev=61.87, samples=19 00:21:49.436 lat (usec) : 500=0.03%, 750=11.99%, 1000=87.88% 00:21:49.436 lat (msec) : 2=0.10% 00:21:49.436 cpu : usr=89.60%, sys=9.01%, ctx=29, majf=0, minf=0 00:21:49.436 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:49.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.437 issued rwts: total=47484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:49.437 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:49.437 filename1: (groupid=0, jobs=1): err= 0: pid=83429: Fri Nov 15 10:45:14 2024 00:21:49.437 read: IOPS=4747, BW=18.5MiB/s (19.4MB/s)(185MiB/10001msec) 00:21:49.437 slat (usec): min=6, max=282, avg=13.45, stdev= 5.26 00:21:49.437 clat (usec): min=427, max=1384, avg=805.26, stdev=40.29 00:21:49.437 lat (usec): min=441, max=1427, avg=818.71, stdev=40.63 00:21:49.437 clat percentiles (usec): 00:21:49.437 | 1.00th=[ 725], 5.00th=[ 750], 10.00th=[ 766], 20.00th=[ 775], 00:21:49.437 | 30.00th=[ 783], 40.00th=[ 791], 50.00th=[ 799], 60.00th=[ 807], 00:21:49.437 | 70.00th=[ 824], 80.00th=[ 832], 90.00th=[ 848], 95.00th=[ 873], 00:21:49.437 | 99.00th=[ 914], 99.50th=[ 930], 99.90th=[ 1074], 99.95th=[ 1319], 00:21:49.437 | 99.99th=[ 1352] 00:21:49.437 bw ( KiB/s): min=18400, max=19328, per=50.00%, avg=18989.47, stdev=240.49, samples=19 00:21:49.437 iops : min= 4600, max= 4832, avg=4747.37, stdev=60.12, samples=19 00:21:49.437 lat (usec) : 500=0.01%, 750=3.95%, 1000=95.87% 00:21:49.437 lat (msec) : 2=0.16% 00:21:49.437 cpu : usr=89.19%, sys=9.11%, ctx=180, majf=0, minf=9 00:21:49.437 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:49.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.437 issued rwts: total=47476,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:21:49.437 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:49.437 00:21:49.437 Run status group 0 (all jobs): 00:21:49.437 READ: bw=37.1MiB/s (38.9MB/s), 18.5MiB/s-18.5MiB/s (19.4MB/s-19.4MB/s), io=371MiB (389MB), run=10001-10001msec 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.437 00:21:49.437 real 0m11.199s 00:21:49.437 user 0m18.674s 00:21:49.437 sys 0m2.110s 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:49.437 10:45:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:49.437 ************************************ 00:21:49.437 END TEST fio_dif_1_multi_subsystems 00:21:49.437 ************************************ 00:21:49.437 10:45:14 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:21:49.437 10:45:14 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:49.437 
10:45:14 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:49.437 10:45:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:49.437 ************************************ 00:21:49.437 START TEST fio_dif_rand_params 00:21:49.437 ************************************ 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:49.437 bdev_null0 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:49.437 [2024-11-15 10:45:14.733100] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@106 -- # fio /dev/fd/62 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:49.437 { 00:21:49.437 "params": { 00:21:49.437 "name": "Nvme$subsystem", 00:21:49.437 "trtype": "$TEST_TRANSPORT", 00:21:49.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.437 "adrfam": "ipv4", 00:21:49.437 "trsvcid": "$NVMF_PORT", 00:21:49.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.437 "hdgst": ${hdgst:-false}, 00:21:49.437 "ddgst": ${ddgst:-false} 00:21:49.437 }, 00:21:49.437 "method": "bdev_nvme_attach_controller" 00:21:49.437 } 00:21:49.437 EOF 00:21:49.437 )") 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:49.437 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:21:49.438 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:49.438 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:21:49.438 10:45:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
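On the control path, every rpc_cmd in these tests is SPDK's JSON-RPC client wrapper, which ordinarily resolves to scripts/rpc.py talking to the target over /var/tmp/spdk.sock. Replaying this NULL_DIF=3 setup by hand, with the arguments copied from the rpc_cmd traces (the TCP transport itself was created once, earlier, at target/dif.sh@139), would look roughly like:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
    # 64 MB null bdev: 512-byte blocks plus 16 bytes of metadata, DIF type 3
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

With --dif-insert-or-strip on the transport and DIF-formatted null bdevs behind the subsystems, the target inserts protection information on writes and strips it on reads, which is the behavior these fio passes exercise.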
00:21:49.438 10:45:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:21:49.438 10:45:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:49.438 "params": { 00:21:49.438 "name": "Nvme0", 00:21:49.438 "trtype": "tcp", 00:21:49.438 "traddr": "10.0.0.3", 00:21:49.438 "adrfam": "ipv4", 00:21:49.438 "trsvcid": "4420", 00:21:49.438 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:49.438 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:49.438 "hdgst": false, 00:21:49.438 "ddgst": false 00:21:49.438 }, 00:21:49.438 "method": "bdev_nvme_attach_controller" 00:21:49.438 }' 00:21:49.438 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:21:49.438 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:21:49.438 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:21:49.438 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:49.438 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:21:49.438 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:21:49.438 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:21:49.438 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:21:49.438 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:49.438 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:49.696 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:49.696 ... 
00:21:49.696 fio-3.35 00:21:49.696 Starting 3 threads 00:21:56.259 00:21:56.259 filename0: (groupid=0, jobs=1): err= 0: pid=83589: Fri Nov 15 10:45:20 2024 00:21:56.259 read: IOPS=253, BW=31.7MiB/s (33.2MB/s)(159MiB/5010msec) 00:21:56.259 slat (nsec): min=7397, max=43534, avg=15394.69, stdev=5054.74 00:21:56.259 clat (usec): min=7901, max=12816, avg=11808.69, stdev=265.36 00:21:56.259 lat (usec): min=7908, max=12832, avg=11824.09, stdev=266.04 00:21:56.259 clat percentiles (usec): 00:21:56.259 | 1.00th=[11600], 5.00th=[11731], 10.00th=[11731], 20.00th=[11731], 00:21:56.259 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11731], 60.00th=[11863], 00:21:56.259 | 70.00th=[11863], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:21:56.259 | 99.00th=[12256], 99.50th=[12256], 99.90th=[12780], 99.95th=[12780], 00:21:56.259 | 99.99th=[12780] 00:21:56.259 bw ( KiB/s): min=32256, max=33024, per=33.37%, avg=32409.60, stdev=323.82, samples=10 00:21:56.259 iops : min= 252, max= 258, avg=253.20, stdev= 2.53, samples=10 00:21:56.259 lat (msec) : 10=0.47%, 20=99.53% 00:21:56.259 cpu : usr=90.98%, sys=8.44%, ctx=4, majf=0, minf=0 00:21:56.259 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:56.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.259 issued rwts: total=1269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:56.259 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:56.259 filename0: (groupid=0, jobs=1): err= 0: pid=83590: Fri Nov 15 10:45:20 2024 00:21:56.259 read: IOPS=252, BW=31.6MiB/s (33.1MB/s)(158MiB/5006msec) 00:21:56.259 slat (nsec): min=7648, max=45065, avg=16310.25, stdev=4873.87 00:21:56.259 clat (usec): min=11651, max=12735, avg=11824.84, stdev=133.11 00:21:56.259 lat (usec): min=11659, max=12760, avg=11841.15, stdev=133.92 00:21:56.259 clat percentiles (usec): 00:21:56.259 | 1.00th=[11731], 5.00th=[11731], 10.00th=[11731], 20.00th=[11731], 00:21:56.259 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11731], 60.00th=[11863], 00:21:56.259 | 70.00th=[11863], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:21:56.259 | 99.00th=[12256], 99.50th=[12256], 99.90th=[12780], 99.95th=[12780], 00:21:56.259 | 99.99th=[12780] 00:21:56.259 bw ( KiB/s): min=31488, max=33024, per=33.29%, avg=32332.80, stdev=435.95, samples=10 00:21:56.259 iops : min= 246, max= 258, avg=252.60, stdev= 3.41, samples=10 00:21:56.259 lat (msec) : 20=100.00% 00:21:56.259 cpu : usr=91.03%, sys=8.39%, ctx=18, majf=0, minf=0 00:21:56.259 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:56.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.259 issued rwts: total=1266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:56.259 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:56.259 filename0: (groupid=0, jobs=1): err= 0: pid=83591: Fri Nov 15 10:45:20 2024 00:21:56.259 read: IOPS=252, BW=31.6MiB/s (33.1MB/s)(158MiB/5006msec) 00:21:56.259 slat (nsec): min=7590, max=54746, avg=16199.16, stdev=4952.96 00:21:56.259 clat (usec): min=11663, max=12712, avg=11823.71, stdev=126.85 00:21:56.259 lat (usec): min=11679, max=12727, avg=11839.91, stdev=127.43 00:21:56.259 clat percentiles (usec): 00:21:56.259 | 1.00th=[11731], 5.00th=[11731], 10.00th=[11731], 20.00th=[11731], 00:21:56.259 | 30.00th=[11731], 40.00th=[11731], 
50.00th=[11731], 60.00th=[11731], 00:21:56.259 | 70.00th=[11863], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:21:56.259 | 99.00th=[12256], 99.50th=[12256], 99.90th=[12649], 99.95th=[12649], 00:21:56.259 | 99.99th=[12649] 00:21:56.259 bw ( KiB/s): min=31488, max=33024, per=33.29%, avg=32332.80, stdev=435.95, samples=10 00:21:56.259 iops : min= 246, max= 258, avg=252.60, stdev= 3.41, samples=10 00:21:56.259 lat (msec) : 20=100.00% 00:21:56.259 cpu : usr=90.03%, sys=9.39%, ctx=9, majf=0, minf=0 00:21:56.259 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:56.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.259 issued rwts: total=1266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:56.259 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:56.259 00:21:56.259 Run status group 0 (all jobs): 00:21:56.259 READ: bw=94.8MiB/s (99.4MB/s), 31.6MiB/s-31.7MiB/s (33.1MB/s-33.2MB/s), io=475MiB (498MB), run=5006-5010msec 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:21:56.259 10:45:20 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:56.259 bdev_null0 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:56.259 [2024-11-15 10:45:20.826752] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:56.259 bdev_null1 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:56.259 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:56.260 bdev_null2 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:56.260 { 00:21:56.260 "params": { 00:21:56.260 "name": 
"Nvme$subsystem", 00:21:56.260 "trtype": "$TEST_TRANSPORT", 00:21:56.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.260 "adrfam": "ipv4", 00:21:56.260 "trsvcid": "$NVMF_PORT", 00:21:56.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.260 "hdgst": ${hdgst:-false}, 00:21:56.260 "ddgst": ${ddgst:-false} 00:21:56.260 }, 00:21:56.260 "method": "bdev_nvme_attach_controller" 00:21:56.260 } 00:21:56.260 EOF 00:21:56.260 )") 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:56.260 { 00:21:56.260 "params": { 00:21:56.260 "name": "Nvme$subsystem", 00:21:56.260 "trtype": "$TEST_TRANSPORT", 00:21:56.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.260 "adrfam": "ipv4", 00:21:56.260 "trsvcid": "$NVMF_PORT", 00:21:56.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.260 "hdgst": ${hdgst:-false}, 00:21:56.260 "ddgst": ${ddgst:-false} 00:21:56.260 }, 00:21:56.260 "method": "bdev_nvme_attach_controller" 00:21:56.260 } 00:21:56.260 EOF 00:21:56.260 )") 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:56.260 10:45:20 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:56.260 { 00:21:56.260 "params": { 00:21:56.260 "name": "Nvme$subsystem", 00:21:56.260 "trtype": "$TEST_TRANSPORT", 00:21:56.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.260 "adrfam": "ipv4", 00:21:56.260 "trsvcid": "$NVMF_PORT", 00:21:56.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.260 "hdgst": ${hdgst:-false}, 00:21:56.260 "ddgst": ${ddgst:-false} 00:21:56.260 }, 00:21:56.260 "method": "bdev_nvme_attach_controller" 00:21:56.260 } 00:21:56.260 EOF 00:21:56.260 )") 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:56.260 "params": { 00:21:56.260 "name": "Nvme0", 00:21:56.260 "trtype": "tcp", 00:21:56.260 "traddr": "10.0.0.3", 00:21:56.260 "adrfam": "ipv4", 00:21:56.260 "trsvcid": "4420", 00:21:56.260 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:56.260 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:56.260 "hdgst": false, 00:21:56.260 "ddgst": false 00:21:56.260 }, 00:21:56.260 "method": "bdev_nvme_attach_controller" 00:21:56.260 },{ 00:21:56.260 "params": { 00:21:56.260 "name": "Nvme1", 00:21:56.260 "trtype": "tcp", 00:21:56.260 "traddr": "10.0.0.3", 00:21:56.260 "adrfam": "ipv4", 00:21:56.260 "trsvcid": "4420", 00:21:56.260 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:56.260 "hdgst": false, 00:21:56.260 "ddgst": false 00:21:56.260 }, 00:21:56.260 "method": "bdev_nvme_attach_controller" 00:21:56.260 },{ 00:21:56.260 "params": { 00:21:56.260 "name": "Nvme2", 00:21:56.260 "trtype": "tcp", 00:21:56.260 "traddr": "10.0.0.3", 00:21:56.260 "adrfam": "ipv4", 00:21:56.260 "trsvcid": "4420", 00:21:56.260 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:56.260 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:56.260 "hdgst": false, 00:21:56.260 "ddgst": false 00:21:56.260 }, 00:21:56.260 "method": "bdev_nvme_attach_controller" 00:21:56.260 }' 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:56.260 10:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:56.260 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:56.260 ... 00:21:56.260 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:56.260 ... 00:21:56.260 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:56.260 ... 00:21:56.260 fio-3.35 00:21:56.260 Starting 24 threads 00:22:08.513 00:22:08.513 filename0: (groupid=0, jobs=1): err= 0: pid=83686: Fri Nov 15 10:45:31 2024 00:22:08.513 read: IOPS=191, BW=766KiB/s (784kB/s)(7688KiB/10040msec) 00:22:08.513 slat (usec): min=8, max=8030, avg=22.80, stdev=243.01 00:22:08.513 clat (msec): min=13, max=155, avg=83.40, stdev=26.97 00:22:08.513 lat (msec): min=13, max=156, avg=83.43, stdev=26.97 00:22:08.513 clat percentiles (msec): 00:22:08.513 | 1.00th=[ 26], 5.00th=[ 32], 10.00th=[ 48], 20.00th=[ 61], 00:22:08.513 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 87], 00:22:08.513 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 121], 95.00th=[ 121], 00:22:08.513 | 99.00th=[ 130], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:22:08.513 | 99.99th=[ 157] 00:22:08.513 bw ( KiB/s): min= 560, max= 1536, per=4.06%, avg=762.40, stdev=215.10, samples=20 00:22:08.513 iops : min= 140, max= 384, avg=190.60, stdev=53.78, samples=20 00:22:08.513 lat (msec) : 20=0.10%, 50=13.42%, 100=52.29%, 250=34.18% 00:22:08.513 cpu : usr=33.79%, sys=1.78%, ctx=978, majf=0, minf=10 00:22:08.513 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=80.5%, 16=16.1%, 32=0.0%, >=64=0.0% 00:22:08.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.513 complete : 0=0.0%, 4=88.0%, 8=11.4%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.513 issued rwts: total=1922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.513 filename0: (groupid=0, jobs=1): err= 0: pid=83687: Fri Nov 15 10:45:31 2024 00:22:08.513 read: IOPS=198, BW=796KiB/s (815kB/s)(7960KiB/10005msec) 00:22:08.513 slat (usec): min=4, max=8027, avg=25.86, stdev=255.96 00:22:08.513 clat (msec): min=8, max=143, avg=80.33, stdev=26.63 00:22:08.513 lat (msec): min=8, max=143, avg=80.35, stdev=26.63 00:22:08.513 clat percentiles (msec): 00:22:08.513 | 1.00th=[ 22], 5.00th=[ 35], 10.00th=[ 48], 20.00th=[ 56], 00:22:08.513 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:22:08.513 | 70.00th=[ 101], 80.00th=[ 109], 90.00th=[ 120], 95.00th=[ 121], 00:22:08.513 | 99.00th=[ 125], 99.50th=[ 127], 99.90th=[ 138], 99.95th=[ 144], 00:22:08.513 | 99.99th=[ 144] 00:22:08.513 bw ( KiB/s): min= 640, max= 1426, per=4.22%, avg=793.42, stdev=193.68, samples=19 00:22:08.513 iops : min= 160, max= 356, avg=198.32, stdev=48.32, samples=19 00:22:08.513 lat (msec) : 10=0.20%, 20=0.45%, 50=14.77%, 100=54.77%, 250=29.80% 00:22:08.513 cpu : usr=39.42%, sys=2.22%, ctx=1313, majf=0, minf=9 00:22:08.513 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.1%, 16=15.9%, 32=0.0%, >=64=0.0% 00:22:08.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.513 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.513 issued rwts: total=1990,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:22:08.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.513 filename0: (groupid=0, jobs=1): err= 0: pid=83688: Fri Nov 15 10:45:31 2024 00:22:08.513 read: IOPS=190, BW=762KiB/s (780kB/s)(7624KiB/10006msec) 00:22:08.513 slat (usec): min=4, max=4022, avg=17.14, stdev=91.92 00:22:08.513 clat (msec): min=8, max=156, avg=83.90, stdev=26.68 00:22:08.513 lat (msec): min=8, max=156, avg=83.91, stdev=26.68 00:22:08.513 clat percentiles (msec): 00:22:08.513 | 1.00th=[ 22], 5.00th=[ 41], 10.00th=[ 49], 20.00th=[ 62], 00:22:08.513 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 81], 60.00th=[ 86], 00:22:08.513 | 70.00th=[ 106], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 121], 00:22:08.513 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:22:08.513 | 99.99th=[ 157] 00:22:08.513 bw ( KiB/s): min= 528, max= 1136, per=4.01%, avg=754.16, stdev=159.83, samples=19 00:22:08.513 iops : min= 132, max= 284, avg=188.53, stdev=39.97, samples=19 00:22:08.513 lat (msec) : 10=0.31%, 20=0.52%, 50=11.59%, 100=55.61%, 250=31.95% 00:22:08.513 cpu : usr=36.24%, sys=2.22%, ctx=1018, majf=0, minf=9 00:22:08.513 IO depths : 1=0.1%, 2=1.1%, 4=4.1%, 8=79.3%, 16=15.3%, 32=0.0%, >=64=0.0% 00:22:08.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.513 complete : 0=0.0%, 4=88.0%, 8=11.1%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.513 issued rwts: total=1906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.513 filename0: (groupid=0, jobs=1): err= 0: pid=83689: Fri Nov 15 10:45:31 2024 00:22:08.513 read: IOPS=200, BW=803KiB/s (823kB/s)(8056KiB/10027msec) 00:22:08.513 slat (usec): min=8, max=8029, avg=22.66, stdev=252.48 00:22:08.513 clat (msec): min=15, max=132, avg=79.50, stdev=26.28 00:22:08.513 lat (msec): min=15, max=132, avg=79.52, stdev=26.28 00:22:08.513 clat percentiles (msec): 00:22:08.513 | 1.00th=[ 19], 5.00th=[ 32], 10.00th=[ 48], 20.00th=[ 58], 00:22:08.513 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:22:08.513 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 121], 00:22:08.513 | 99.00th=[ 122], 99.50th=[ 125], 99.90th=[ 132], 99.95th=[ 132], 00:22:08.513 | 99.99th=[ 132] 00:22:08.513 bw ( KiB/s): min= 664, max= 1576, per=4.29%, avg=806.32, stdev=221.49, samples=19 00:22:08.513 iops : min= 166, max= 394, avg=201.58, stdev=55.37, samples=19 00:22:08.513 lat (msec) : 20=1.24%, 50=16.14%, 100=54.22%, 250=28.40% 00:22:08.513 cpu : usr=33.47%, sys=2.05%, ctx=972, majf=0, minf=9 00:22:08.513 IO depths : 1=0.1%, 2=0.1%, 4=0.9%, 8=83.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:22:08.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.513 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.513 issued rwts: total=2014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.513 filename0: (groupid=0, jobs=1): err= 0: pid=83690: Fri Nov 15 10:45:31 2024 00:22:08.513 read: IOPS=201, BW=806KiB/s (825kB/s)(8104KiB/10057msec) 00:22:08.513 slat (usec): min=4, max=8024, avg=25.24, stdev=308.10 00:22:08.513 clat (usec): min=1717, max=153867, avg=79202.07, stdev=33021.07 00:22:08.513 lat (usec): min=1727, max=153882, avg=79227.31, stdev=33018.69 00:22:08.513 clat percentiles (msec): 00:22:08.513 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 25], 20.00th=[ 56], 00:22:08.513 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 88], 
00:22:08.513 | 70.00th=[ 107], 80.00th=[ 111], 90.00th=[ 120], 95.00th=[ 121], 00:22:08.513 | 99.00th=[ 124], 99.50th=[ 127], 99.90th=[ 146], 99.95th=[ 153], 00:22:08.513 | 99.99th=[ 155] 00:22:08.513 bw ( KiB/s): min= 584, max= 2656, per=4.28%, avg=804.80, stdev=447.62, samples=20 00:22:08.513 iops : min= 146, max= 664, avg=201.20, stdev=111.90, samples=20 00:22:08.513 lat (msec) : 2=0.79%, 4=1.58%, 10=3.06%, 20=3.21%, 50=10.12% 00:22:08.513 lat (msec) : 100=47.33%, 250=33.91% 00:22:08.513 cpu : usr=39.37%, sys=2.29%, ctx=1141, majf=0, minf=3 00:22:08.513 IO depths : 1=0.2%, 2=1.0%, 4=3.3%, 8=79.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:22:08.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.513 complete : 0=0.0%, 4=88.5%, 8=10.7%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.513 issued rwts: total=2026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.513 filename0: (groupid=0, jobs=1): err= 0: pid=83691: Fri Nov 15 10:45:31 2024 00:22:08.513 read: IOPS=198, BW=794KiB/s (813kB/s)(7980KiB/10050msec) 00:22:08.513 slat (usec): min=6, max=8025, avg=35.79, stdev=337.14 00:22:08.513 clat (msec): min=13, max=140, avg=80.31, stdev=26.53 00:22:08.513 lat (msec): min=13, max=140, avg=80.35, stdev=26.52 00:22:08.513 clat percentiles (msec): 00:22:08.513 | 1.00th=[ 22], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 58], 00:22:08.513 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 83], 00:22:08.513 | 70.00th=[ 100], 80.00th=[ 110], 90.00th=[ 118], 95.00th=[ 121], 00:22:08.513 | 99.00th=[ 125], 99.50th=[ 128], 99.90th=[ 128], 99.95th=[ 142], 00:22:08.513 | 99.99th=[ 142] 00:22:08.513 bw ( KiB/s): min= 640, max= 1656, per=4.21%, avg=791.60, stdev=232.23, samples=20 00:22:08.513 iops : min= 160, max= 414, avg=197.90, stdev=58.06, samples=20 00:22:08.513 lat (msec) : 20=0.80%, 50=13.83%, 100=55.59%, 250=29.77% 00:22:08.513 cpu : usr=39.79%, sys=2.24%, ctx=1328, majf=0, minf=9 00:22:08.513 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=82.0%, 16=15.8%, 32=0.0%, >=64=0.0% 00:22:08.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.513 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.513 issued rwts: total=1995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.513 filename0: (groupid=0, jobs=1): err= 0: pid=83692: Fri Nov 15 10:45:31 2024 00:22:08.513 read: IOPS=197, BW=790KiB/s (809kB/s)(7900KiB/10002msec) 00:22:08.513 slat (usec): min=7, max=8027, avg=33.03, stdev=371.35 00:22:08.513 clat (msec): min=2, max=156, avg=80.89, stdev=27.58 00:22:08.513 lat (msec): min=2, max=156, avg=80.93, stdev=27.60 00:22:08.513 clat percentiles (msec): 00:22:08.513 | 1.00th=[ 4], 5.00th=[ 32], 10.00th=[ 48], 20.00th=[ 61], 00:22:08.513 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 85], 00:22:08.513 | 70.00th=[ 99], 80.00th=[ 109], 90.00th=[ 120], 95.00th=[ 121], 00:22:08.513 | 99.00th=[ 123], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:22:08.513 | 99.99th=[ 157] 00:22:08.513 bw ( KiB/s): min= 528, max= 1158, per=4.09%, avg=769.16, stdev=154.99, samples=19 00:22:08.513 iops : min= 132, max= 289, avg=192.26, stdev=38.68, samples=19 00:22:08.513 lat (msec) : 4=1.11%, 10=0.66%, 20=0.61%, 50=12.00%, 100=55.95% 00:22:08.513 lat (msec) : 250=29.67% 00:22:08.513 cpu : usr=34.14%, sys=1.84%, ctx=986, majf=0, minf=9 00:22:08.513 IO depths : 1=0.1%, 2=1.0%, 4=4.2%, 8=79.6%, 16=15.2%, 32=0.0%, 
>=64=0.0% 00:22:08.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.513 complete : 0=0.0%, 4=87.9%, 8=11.2%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.513 issued rwts: total=1975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.513 filename0: (groupid=0, jobs=1): err= 0: pid=83693: Fri Nov 15 10:45:31 2024 00:22:08.513 read: IOPS=199, BW=796KiB/s (816kB/s)(8004KiB/10049msec) 00:22:08.513 slat (usec): min=6, max=8036, avg=28.67, stdev=295.20 00:22:08.513 clat (msec): min=14, max=152, avg=80.15, stdev=29.74 00:22:08.513 lat (msec): min=14, max=152, avg=80.18, stdev=29.76 00:22:08.513 clat percentiles (msec): 00:22:08.514 | 1.00th=[ 17], 5.00th=[ 23], 10.00th=[ 36], 20.00th=[ 54], 00:22:08.514 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 79], 60.00th=[ 85], 00:22:08.514 | 70.00th=[ 106], 80.00th=[ 111], 90.00th=[ 118], 95.00th=[ 121], 00:22:08.514 | 99.00th=[ 124], 99.50th=[ 127], 99.90th=[ 142], 99.95th=[ 148], 00:22:08.514 | 99.99th=[ 153] 00:22:08.514 bw ( KiB/s): min= 584, max= 2047, per=4.22%, avg=793.15, stdev=320.80, samples=20 00:22:08.514 iops : min= 146, max= 511, avg=198.25, stdev=80.04, samples=20 00:22:08.514 lat (msec) : 20=3.55%, 50=13.79%, 100=49.83%, 250=32.83% 00:22:08.514 cpu : usr=40.74%, sys=2.47%, ctx=1656, majf=0, minf=9 00:22:08.514 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.1%, 16=16.1%, 32=0.0%, >=64=0.0% 00:22:08.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.514 complete : 0=0.0%, 4=87.5%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.514 issued rwts: total=2001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.514 filename1: (groupid=0, jobs=1): err= 0: pid=83694: Fri Nov 15 10:45:31 2024 00:22:08.514 read: IOPS=193, BW=772KiB/s (791kB/s)(7740KiB/10024msec) 00:22:08.514 slat (usec): min=5, max=8031, avg=31.78, stdev=340.66 00:22:08.514 clat (msec): min=20, max=157, avg=82.66, stdev=24.76 00:22:08.514 lat (msec): min=20, max=157, avg=82.69, stdev=24.76 00:22:08.514 clat percentiles (msec): 00:22:08.514 | 1.00th=[ 34], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 61], 00:22:08.514 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 84], 00:22:08.514 | 70.00th=[ 105], 80.00th=[ 110], 90.00th=[ 120], 95.00th=[ 121], 00:22:08.514 | 99.00th=[ 123], 99.50th=[ 128], 99.90th=[ 144], 99.95th=[ 157], 00:22:08.514 | 99.99th=[ 157] 00:22:08.514 bw ( KiB/s): min= 616, max= 1168, per=4.12%, avg=774.32, stdev=152.06, samples=19 00:22:08.514 iops : min= 154, max= 292, avg=193.58, stdev=38.02, samples=19 00:22:08.514 lat (msec) : 50=13.95%, 100=55.50%, 250=30.54% 00:22:08.514 cpu : usr=35.42%, sys=1.92%, ctx=1003, majf=0, minf=9 00:22:08.514 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.7%, 16=15.8%, 32=0.0%, >=64=0.0% 00:22:08.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.514 complete : 0=0.0%, 4=87.5%, 8=12.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.514 issued rwts: total=1935,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.514 filename1: (groupid=0, jobs=1): err= 0: pid=83695: Fri Nov 15 10:45:31 2024 00:22:08.514 read: IOPS=196, BW=785KiB/s (804kB/s)(7868KiB/10027msec) 00:22:08.514 slat (usec): min=3, max=8034, avg=19.05, stdev=180.88 00:22:08.514 clat (msec): min=22, max=143, avg=81.44, stdev=26.25 00:22:08.514 lat (msec): min=23, max=143, avg=81.46, 
stdev=26.25 00:22:08.514 clat percentiles (msec): 00:22:08.514 | 1.00th=[ 29], 5.00th=[ 31], 10.00th=[ 48], 20.00th=[ 61], 00:22:08.514 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 81], 60.00th=[ 85], 00:22:08.514 | 70.00th=[ 104], 80.00th=[ 109], 90.00th=[ 118], 95.00th=[ 121], 00:22:08.514 | 99.00th=[ 125], 99.50th=[ 127], 99.90th=[ 132], 99.95th=[ 144], 00:22:08.514 | 99.99th=[ 144] 00:22:08.514 bw ( KiB/s): min= 616, max= 1520, per=4.15%, avg=780.75, stdev=209.64, samples=20 00:22:08.514 iops : min= 154, max= 380, avg=195.15, stdev=52.42, samples=20 00:22:08.514 lat (msec) : 50=15.20%, 100=54.25%, 250=30.55% 00:22:08.514 cpu : usr=35.50%, sys=2.10%, ctx=1148, majf=0, minf=9 00:22:08.514 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=80.5%, 16=15.4%, 32=0.0%, >=64=0.0% 00:22:08.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.514 complete : 0=0.0%, 4=87.7%, 8=11.6%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.514 issued rwts: total=1967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.514 filename1: (groupid=0, jobs=1): err= 0: pid=83696: Fri Nov 15 10:45:31 2024 00:22:08.514 read: IOPS=194, BW=779KiB/s (798kB/s)(7816KiB/10031msec) 00:22:08.514 slat (usec): min=4, max=8026, avg=23.52, stdev=221.97 00:22:08.514 clat (msec): min=29, max=142, avg=81.97, stdev=24.74 00:22:08.514 lat (msec): min=29, max=142, avg=81.99, stdev=24.74 00:22:08.514 clat percentiles (msec): 00:22:08.514 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 61], 00:22:08.514 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 84], 00:22:08.514 | 70.00th=[ 100], 80.00th=[ 109], 90.00th=[ 118], 95.00th=[ 121], 00:22:08.514 | 99.00th=[ 125], 99.50th=[ 127], 99.90th=[ 144], 99.95th=[ 144], 00:22:08.514 | 99.99th=[ 144] 00:22:08.514 bw ( KiB/s): min= 640, max= 1296, per=4.12%, avg=775.20, stdev=166.42, samples=20 00:22:08.514 iops : min= 160, max= 324, avg=193.80, stdev=41.60, samples=20 00:22:08.514 lat (msec) : 50=13.10%, 100=57.63%, 250=29.27% 00:22:08.514 cpu : usr=37.58%, sys=2.03%, ctx=1121, majf=0, minf=9 00:22:08.514 IO depths : 1=0.2%, 2=0.6%, 4=1.6%, 8=82.0%, 16=15.6%, 32=0.0%, >=64=0.0% 00:22:08.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.514 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.514 issued rwts: total=1954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.514 filename1: (groupid=0, jobs=1): err= 0: pid=83697: Fri Nov 15 10:45:31 2024 00:22:08.514 read: IOPS=196, BW=784KiB/s (803kB/s)(7884KiB/10053msec) 00:22:08.514 slat (usec): min=4, max=5022, avg=23.02, stdev=193.18 00:22:08.514 clat (msec): min=14, max=163, avg=81.43, stdev=29.79 00:22:08.514 lat (msec): min=14, max=164, avg=81.45, stdev=29.79 00:22:08.514 clat percentiles (msec): 00:22:08.514 | 1.00th=[ 16], 5.00th=[ 23], 10.00th=[ 32], 20.00th=[ 58], 00:22:08.514 | 30.00th=[ 71], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 87], 00:22:08.514 | 70.00th=[ 106], 80.00th=[ 112], 90.00th=[ 120], 95.00th=[ 121], 00:22:08.514 | 99.00th=[ 125], 99.50th=[ 132], 99.90th=[ 153], 99.95th=[ 165], 00:22:08.514 | 99.99th=[ 165] 00:22:08.514 bw ( KiB/s): min= 584, max= 2056, per=4.16%, avg=782.00, stdev=319.85, samples=20 00:22:08.514 iops : min= 146, max= 514, avg=195.50, stdev=79.96, samples=20 00:22:08.514 lat (msec) : 20=2.89%, 50=13.95%, 100=47.95%, 250=35.21% 00:22:08.514 cpu : usr=42.52%, sys=2.62%, ctx=1655, majf=0, 
minf=9 00:22:08.514 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.0%, 16=16.4%, 32=0.0%, >=64=0.0% 00:22:08.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.514 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.514 issued rwts: total=1971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.514 filename1: (groupid=0, jobs=1): err= 0: pid=83698: Fri Nov 15 10:45:31 2024 00:22:08.514 read: IOPS=194, BW=779KiB/s (797kB/s)(7828KiB/10054msec) 00:22:08.514 slat (usec): min=3, max=8022, avg=21.86, stdev=221.79 00:22:08.514 clat (msec): min=2, max=155, avg=82.01, stdev=33.49 00:22:08.514 lat (msec): min=2, max=155, avg=82.03, stdev=33.49 00:22:08.514 clat percentiles (msec): 00:22:08.514 | 1.00th=[ 4], 5.00th=[ 16], 10.00th=[ 31], 20.00th=[ 55], 00:22:08.514 | 30.00th=[ 71], 40.00th=[ 75], 50.00th=[ 82], 60.00th=[ 96], 00:22:08.514 | 70.00th=[ 108], 80.00th=[ 113], 90.00th=[ 121], 95.00th=[ 121], 00:22:08.514 | 99.00th=[ 150], 99.50th=[ 153], 99.90th=[ 155], 99.95th=[ 157], 00:22:08.514 | 99.99th=[ 157] 00:22:08.514 bw ( KiB/s): min= 512, max= 2304, per=4.14%, avg=777.60, stdev=384.60, samples=20 00:22:08.514 iops : min= 128, max= 576, avg=194.40, stdev=96.15, samples=20 00:22:08.514 lat (msec) : 4=2.45%, 10=1.64%, 20=0.92%, 50=12.62%, 100=44.00% 00:22:08.514 lat (msec) : 250=38.38% 00:22:08.514 cpu : usr=40.17%, sys=2.72%, ctx=1335, majf=0, minf=0 00:22:08.514 IO depths : 1=0.4%, 2=1.8%, 4=6.0%, 8=76.4%, 16=15.4%, 32=0.0%, >=64=0.0% 00:22:08.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.514 complete : 0=0.0%, 4=89.2%, 8=9.5%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.514 issued rwts: total=1957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.514 filename1: (groupid=0, jobs=1): err= 0: pid=83699: Fri Nov 15 10:45:31 2024 00:22:08.514 read: IOPS=195, BW=783KiB/s (802kB/s)(7876KiB/10053msec) 00:22:08.514 slat (usec): min=5, max=8042, avg=31.38, stdev=337.91 00:22:08.514 clat (msec): min=13, max=154, avg=81.49, stdev=28.27 00:22:08.514 lat (msec): min=13, max=155, avg=81.52, stdev=28.28 00:22:08.514 clat percentiles (msec): 00:22:08.514 | 1.00th=[ 16], 5.00th=[ 26], 10.00th=[ 40], 20.00th=[ 61], 00:22:08.514 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 81], 60.00th=[ 85], 00:22:08.514 | 70.00th=[ 107], 80.00th=[ 111], 90.00th=[ 118], 95.00th=[ 121], 00:22:08.514 | 99.00th=[ 125], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 155], 00:22:08.514 | 99.99th=[ 155] 00:22:08.514 bw ( KiB/s): min= 608, max= 1880, per=4.16%, avg=781.20, stdev=278.19, samples=20 00:22:08.514 iops : min= 152, max= 470, avg=195.30, stdev=69.55, samples=20 00:22:08.514 lat (msec) : 20=1.93%, 50=14.32%, 100=50.84%, 250=32.91% 00:22:08.514 cpu : usr=36.17%, sys=1.91%, ctx=1019, majf=0, minf=9 00:22:08.514 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.7%, 16=16.3%, 32=0.0%, >=64=0.0% 00:22:08.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.514 complete : 0=0.0%, 4=87.7%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.514 issued rwts: total=1969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.514 filename1: (groupid=0, jobs=1): err= 0: pid=83700: Fri Nov 15 10:45:31 2024 00:22:08.514 read: IOPS=195, BW=783KiB/s (801kB/s)(7860KiB/10042msec) 00:22:08.514 slat (usec): min=5, max=4029, avg=21.23, 
stdev=157.25 00:22:08.514 clat (msec): min=14, max=144, avg=81.60, stdev=26.05 00:22:08.514 lat (msec): min=14, max=144, avg=81.62, stdev=26.06 00:22:08.514 clat percentiles (msec): 00:22:08.514 | 1.00th=[ 25], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 59], 00:22:08.514 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 84], 00:22:08.514 | 70.00th=[ 103], 80.00th=[ 110], 90.00th=[ 117], 95.00th=[ 121], 00:22:08.514 | 99.00th=[ 125], 99.50th=[ 129], 99.90th=[ 140], 99.95th=[ 144], 00:22:08.514 | 99.99th=[ 144] 00:22:08.514 bw ( KiB/s): min= 616, max= 1512, per=4.15%, avg=779.60, stdev=203.57, samples=20 00:22:08.514 iops : min= 154, max= 378, avg=194.90, stdev=50.89, samples=20 00:22:08.514 lat (msec) : 20=0.41%, 50=13.38%, 100=55.52%, 250=30.69% 00:22:08.514 cpu : usr=42.05%, sys=2.33%, ctx=1389, majf=0, minf=9 00:22:08.514 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=82.1%, 16=15.9%, 32=0.0%, >=64=0.0% 00:22:08.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.514 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.515 issued rwts: total=1965,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.515 filename1: (groupid=0, jobs=1): err= 0: pid=83701: Fri Nov 15 10:45:31 2024 00:22:08.515 read: IOPS=191, BW=764KiB/s (783kB/s)(7660KiB/10023msec) 00:22:08.515 slat (usec): min=4, max=12029, avg=32.16, stdev=365.89 00:22:08.515 clat (msec): min=26, max=145, avg=83.47, stdev=26.03 00:22:08.515 lat (msec): min=26, max=145, avg=83.50, stdev=26.03 00:22:08.515 clat percentiles (msec): 00:22:08.515 | 1.00th=[ 28], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 61], 00:22:08.515 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 82], 60.00th=[ 85], 00:22:08.515 | 70.00th=[ 107], 80.00th=[ 110], 90.00th=[ 118], 95.00th=[ 121], 00:22:08.515 | 99.00th=[ 131], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:22:08.515 | 99.99th=[ 146] 00:22:08.515 bw ( KiB/s): min= 528, max= 1280, per=4.05%, avg=761.35, stdev=171.94, samples=20 00:22:08.515 iops : min= 132, max= 320, avg=190.30, stdev=42.98, samples=20 00:22:08.515 lat (msec) : 50=13.58%, 100=54.10%, 250=32.32% 00:22:08.515 cpu : usr=36.61%, sys=2.32%, ctx=987, majf=0, minf=9 00:22:08.515 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=77.9%, 16=15.1%, 32=0.0%, >=64=0.0% 00:22:08.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.515 complete : 0=0.0%, 4=88.4%, 8=10.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.515 issued rwts: total=1915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.515 filename2: (groupid=0, jobs=1): err= 0: pid=83702: Fri Nov 15 10:45:31 2024 00:22:08.515 read: IOPS=195, BW=780KiB/s (799kB/s)(7848KiB/10056msec) 00:22:08.515 slat (usec): min=6, max=8023, avg=29.04, stdev=264.57 00:22:08.515 clat (msec): min=14, max=155, avg=81.79, stdev=28.63 00:22:08.515 lat (msec): min=14, max=155, avg=81.82, stdev=28.64 00:22:08.515 clat percentiles (msec): 00:22:08.515 | 1.00th=[ 17], 5.00th=[ 29], 10.00th=[ 39], 20.00th=[ 61], 00:22:08.515 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 81], 60.00th=[ 85], 00:22:08.515 | 70.00th=[ 107], 80.00th=[ 111], 90.00th=[ 120], 95.00th=[ 121], 00:22:08.515 | 99.00th=[ 125], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 157], 00:22:08.515 | 99.99th=[ 157] 00:22:08.515 bw ( KiB/s): min= 568, max= 1888, per=4.14%, avg=777.60, stdev=283.86, samples=20 00:22:08.515 iops : min= 142, max= 472, avg=194.40, stdev=70.97, 
samples=20 00:22:08.515 lat (msec) : 20=3.11%, 50=11.98%, 100=50.82%, 250=34.10% 00:22:08.515 cpu : usr=39.67%, sys=2.69%, ctx=1188, majf=0, minf=9 00:22:08.515 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=79.3%, 16=15.9%, 32=0.0%, >=64=0.0% 00:22:08.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.515 complete : 0=0.0%, 4=88.3%, 8=10.9%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.515 issued rwts: total=1962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.515 filename2: (groupid=0, jobs=1): err= 0: pid=83703: Fri Nov 15 10:45:31 2024 00:22:08.515 read: IOPS=201, BW=808KiB/s (827kB/s)(8080KiB/10002msec) 00:22:08.515 slat (usec): min=4, max=8043, avg=30.70, stdev=356.41 00:22:08.515 clat (msec): min=3, max=123, avg=79.10, stdev=26.45 00:22:08.515 lat (msec): min=3, max=123, avg=79.13, stdev=26.44 00:22:08.515 clat percentiles (msec): 00:22:08.515 | 1.00th=[ 14], 5.00th=[ 35], 10.00th=[ 48], 20.00th=[ 58], 00:22:08.515 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:22:08.515 | 70.00th=[ 94], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 121], 00:22:08.515 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 125], 99.95th=[ 125], 00:22:08.515 | 99.99th=[ 125] 00:22:08.515 bw ( KiB/s): min= 664, max= 1344, per=4.24%, avg=797.05, stdev=176.87, samples=19 00:22:08.515 iops : min= 166, max= 336, avg=199.26, stdev=44.22, samples=19 00:22:08.515 lat (msec) : 4=0.30%, 10=0.59%, 20=0.54%, 50=15.05%, 100=55.59% 00:22:08.515 lat (msec) : 250=27.92% 00:22:08.515 cpu : usr=34.29%, sys=1.78%, ctx=1019, majf=0, minf=9 00:22:08.515 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:22:08.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.515 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.515 issued rwts: total=2020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.515 filename2: (groupid=0, jobs=1): err= 0: pid=83704: Fri Nov 15 10:45:31 2024 00:22:08.515 read: IOPS=201, BW=806KiB/s (825kB/s)(8076KiB/10022msec) 00:22:08.515 slat (usec): min=4, max=8022, avg=18.51, stdev=178.27 00:22:08.515 clat (msec): min=22, max=131, avg=79.28, stdev=26.16 00:22:08.515 lat (msec): min=22, max=131, avg=79.30, stdev=26.16 00:22:08.515 clat percentiles (msec): 00:22:08.515 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 59], 00:22:08.515 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:22:08.515 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 121], 00:22:08.515 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 132], 00:22:08.515 | 99.99th=[ 132] 00:22:08.515 bw ( KiB/s): min= 664, max= 1568, per=4.27%, avg=803.30, stdev=210.48, samples=20 00:22:08.515 iops : min= 166, max= 392, avg=200.80, stdev=52.61, samples=20 00:22:08.515 lat (msec) : 50=16.79%, 100=55.52%, 250=27.69% 00:22:08.515 cpu : usr=32.57%, sys=2.01%, ctx=965, majf=0, minf=9 00:22:08.515 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:22:08.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.515 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.515 issued rwts: total=2019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.515 filename2: (groupid=0, jobs=1): err= 0: pid=83705: Fri Nov 15 10:45:31 
2024 00:22:08.515 read: IOPS=198, BW=795KiB/s (815kB/s)(7956KiB/10002msec) 00:22:08.515 slat (usec): min=4, max=12029, avg=25.63, stdev=323.79 00:22:08.515 clat (msec): min=3, max=128, avg=80.34, stdev=25.70 00:22:08.515 lat (msec): min=3, max=128, avg=80.37, stdev=25.69 00:22:08.515 clat percentiles (msec): 00:22:08.515 | 1.00th=[ 14], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 58], 00:22:08.515 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 83], 00:22:08.515 | 70.00th=[ 96], 80.00th=[ 110], 90.00th=[ 116], 95.00th=[ 121], 00:22:08.515 | 99.00th=[ 122], 99.50th=[ 123], 99.90th=[ 129], 99.95th=[ 129], 00:22:08.515 | 99.99th=[ 129] 00:22:08.515 bw ( KiB/s): min= 664, max= 1088, per=4.16%, avg=782.32, stdev=136.07, samples=19 00:22:08.515 iops : min= 166, max= 272, avg=195.58, stdev=34.02, samples=19 00:22:08.515 lat (msec) : 4=0.30%, 10=0.65%, 20=0.65%, 50=12.87%, 100=56.71% 00:22:08.515 lat (msec) : 250=28.81% 00:22:08.515 cpu : usr=41.11%, sys=2.22%, ctx=1311, majf=0, minf=9 00:22:08.515 IO depths : 1=0.2%, 2=0.5%, 4=1.4%, 8=82.6%, 16=15.3%, 32=0.0%, >=64=0.0% 00:22:08.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.515 complete : 0=0.0%, 4=86.9%, 8=12.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.515 issued rwts: total=1989,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.515 filename2: (groupid=0, jobs=1): err= 0: pid=83706: Fri Nov 15 10:45:31 2024 00:22:08.515 read: IOPS=192, BW=769KiB/s (788kB/s)(7728KiB/10048msec) 00:22:08.515 slat (usec): min=3, max=8024, avg=19.57, stdev=203.83 00:22:08.515 clat (msec): min=9, max=155, avg=83.06, stdev=27.69 00:22:08.515 lat (msec): min=10, max=155, avg=83.08, stdev=27.70 00:22:08.515 clat percentiles (msec): 00:22:08.515 | 1.00th=[ 23], 5.00th=[ 31], 10.00th=[ 47], 20.00th=[ 62], 00:22:08.515 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 82], 60.00th=[ 90], 00:22:08.515 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 120], 95.00th=[ 121], 00:22:08.515 | 99.00th=[ 125], 99.50th=[ 129], 99.90th=[ 146], 99.95th=[ 157], 00:22:08.515 | 99.99th=[ 157] 00:22:08.515 bw ( KiB/s): min= 568, max= 1730, per=4.07%, avg=765.70, stdev=251.88, samples=20 00:22:08.515 iops : min= 142, max= 432, avg=191.40, stdev=62.87, samples=20 00:22:08.515 lat (msec) : 10=0.05%, 20=0.78%, 50=12.47%, 100=53.99%, 250=32.71% 00:22:08.515 cpu : usr=32.17%, sys=2.01%, ctx=997, majf=0, minf=9 00:22:08.515 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=80.4%, 16=16.3%, 32=0.0%, >=64=0.0% 00:22:08.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.515 complete : 0=0.0%, 4=88.1%, 8=11.3%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.515 issued rwts: total=1932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.515 filename2: (groupid=0, jobs=1): err= 0: pid=83707: Fri Nov 15 10:45:31 2024 00:22:08.515 read: IOPS=194, BW=777KiB/s (796kB/s)(7776KiB/10006msec) 00:22:08.515 slat (usec): min=4, max=4025, avg=18.97, stdev=114.31 00:22:08.515 clat (msec): min=5, max=156, avg=82.25, stdev=27.51 00:22:08.515 lat (msec): min=5, max=156, avg=82.27, stdev=27.51 00:22:08.515 clat percentiles (msec): 00:22:08.515 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 48], 20.00th=[ 61], 00:22:08.515 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 81], 60.00th=[ 85], 00:22:08.515 | 70.00th=[ 106], 80.00th=[ 109], 90.00th=[ 120], 95.00th=[ 121], 00:22:08.515 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 
00:22:08.515 | 99.99th=[ 157] 00:22:08.515 bw ( KiB/s): min= 512, max= 1280, per=4.09%, avg=768.89, stdev=186.20, samples=19 00:22:08.515 iops : min= 128, max= 320, avg=192.21, stdev=46.55, samples=19 00:22:08.515 lat (msec) : 10=0.31%, 20=0.36%, 50=13.99%, 100=52.88%, 250=32.46% 00:22:08.515 cpu : usr=36.78%, sys=2.00%, ctx=1153, majf=0, minf=9 00:22:08.515 IO depths : 1=0.1%, 2=1.4%, 4=5.5%, 8=78.1%, 16=14.9%, 32=0.0%, >=64=0.0% 00:22:08.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.515 complete : 0=0.0%, 4=88.2%, 8=10.6%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.515 issued rwts: total=1944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.515 filename2: (groupid=0, jobs=1): err= 0: pid=83708: Fri Nov 15 10:45:31 2024 00:22:08.515 read: IOPS=190, BW=761KiB/s (779kB/s)(7632KiB/10029msec) 00:22:08.515 slat (usec): min=8, max=5027, avg=18.52, stdev=123.74 00:22:08.515 clat (msec): min=30, max=155, avg=83.96, stdev=26.23 00:22:08.515 lat (msec): min=30, max=155, avg=83.98, stdev=26.22 00:22:08.515 clat percentiles (msec): 00:22:08.515 | 1.00th=[ 31], 5.00th=[ 33], 10.00th=[ 48], 20.00th=[ 64], 00:22:08.515 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 82], 60.00th=[ 88], 00:22:08.515 | 70.00th=[ 108], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 121], 00:22:08.515 | 99.00th=[ 125], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 157], 00:22:08.515 | 99.99th=[ 157] 00:22:08.516 bw ( KiB/s): min= 584, max= 1408, per=4.02%, avg=756.80, stdev=192.78, samples=20 00:22:08.516 iops : min= 146, max= 352, avg=189.20, stdev=48.19, samples=20 00:22:08.516 lat (msec) : 50=11.16%, 100=54.56%, 250=34.28% 00:22:08.516 cpu : usr=35.69%, sys=2.17%, ctx=1091, majf=0, minf=9 00:22:08.516 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=80.4%, 16=15.9%, 32=0.0%, >=64=0.0% 00:22:08.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.516 complete : 0=0.0%, 4=88.0%, 8=11.4%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.516 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.516 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.516 filename2: (groupid=0, jobs=1): err= 0: pid=83709: Fri Nov 15 10:45:31 2024 00:22:08.516 read: IOPS=200, BW=801KiB/s (821kB/s)(8020KiB/10008msec) 00:22:08.516 slat (usec): min=4, max=8043, avg=40.54, stdev=446.83 00:22:08.516 clat (msec): min=8, max=128, avg=79.67, stdev=26.08 00:22:08.516 lat (msec): min=8, max=128, avg=79.71, stdev=26.09 00:22:08.516 clat percentiles (msec): 00:22:08.516 | 1.00th=[ 22], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 58], 00:22:08.516 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:22:08.516 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 117], 95.00th=[ 121], 00:22:08.516 | 99.00th=[ 123], 99.50th=[ 126], 99.90th=[ 129], 99.95th=[ 129], 00:22:08.516 | 99.99th=[ 129] 00:22:08.516 bw ( KiB/s): min= 664, max= 1272, per=4.23%, avg=794.42, stdev=165.14, samples=19 00:22:08.516 iops : min= 166, max= 318, avg=198.58, stdev=41.27, samples=19 00:22:08.516 lat (msec) : 10=0.30%, 20=0.65%, 50=16.26%, 100=55.11%, 250=27.68% 00:22:08.516 cpu : usr=36.08%, sys=2.10%, ctx=1135, majf=0, minf=9 00:22:08.516 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=82.1%, 16=15.4%, 32=0.0%, >=64=0.0% 00:22:08.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.516 complete : 0=0.0%, 4=87.1%, 8=12.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.516 issued rwts: total=2005,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:22:08.516 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:08.516 00:22:08.516 Run status group 0 (all jobs): 00:22:08.516 READ: bw=18.3MiB/s (19.2MB/s), 761KiB/s-808KiB/s (779kB/s-827kB/s), io=185MiB (193MB), run=10002-10057msec 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:22:08.516 10:45:32 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:08.516 bdev_null0 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:08.516 [2024-11-15 10:45:32.293146] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:22:08.516 
10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:08.516 bdev_null1 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:08.516 10:45:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:08.516 { 00:22:08.516 "params": { 00:22:08.516 "name": "Nvme$subsystem", 00:22:08.516 "trtype": "$TEST_TRANSPORT", 00:22:08.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.516 "adrfam": "ipv4", 00:22:08.516 "trsvcid": "$NVMF_PORT", 00:22:08.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.517 "hdgst": ${hdgst:-false}, 00:22:08.517 "ddgst": ${ddgst:-false} 00:22:08.517 }, 00:22:08.517 "method": "bdev_nvme_attach_controller" 00:22:08.517 } 00:22:08.517 EOF 00:22:08.517 )") 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:08.517 
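The create_subsystem calls traced above reduce to four RPCs per subsystem: create a null bdev with 512-byte blocks plus 16 bytes of metadata (DIF type 1 on this pass, versus type 2 in the earlier run), create the NVMe-oF subsystem, attach the bdev as a namespace, and expose a TCP listener. Since rpc_cmd in the harness forwards to scripts/rpc.py, a standalone replay of the same sequence for subsystem 1 would look roughly like the sketch below (paths, addresses, and flags are taken verbatim from the trace; the default RPC socket is assumed):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sub=1
    # 64 MB null bdev, 512-byte blocks + 16-byte metadata, DIF type 1
    $rpc bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
        --serial-number "53313233-$sub" --allow-any-host
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
        -t tcp -a 10.0.0.3 -s 4420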
10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:08.517 { 00:22:08.517 "params": { 00:22:08.517 "name": "Nvme$subsystem", 00:22:08.517 "trtype": "$TEST_TRANSPORT", 00:22:08.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.517 "adrfam": "ipv4", 00:22:08.517 "trsvcid": "$NVMF_PORT", 00:22:08.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.517 "hdgst": ${hdgst:-false}, 00:22:08.517 "ddgst": ${ddgst:-false} 00:22:08.517 }, 00:22:08.517 "method": "bdev_nvme_attach_controller" 00:22:08.517 } 00:22:08.517 EOF 00:22:08.517 )") 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
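Two things happen interleaved in the trace above: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem (the heredocs with $TEST_TRANSPORT and friends still unexpanded), and the harness probes the fio plugin for linked sanitizer runtimes so any such runtime can be preloaded ahead of the plugin. A condensed sketch of the sanitizer probe, reusing the exact ldd/grep/awk commands from the trace (the real code checks each sanitizer in a separate loop pass):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # The harness checks libasan and libclang_rt.asan; both probes come
    # back empty on this host, so asan_lib stays blank and LD_PRELOAD
    # ends up holding only the plugin itself (hence the leading space
    # in the LD_PRELOAD value printed further down).
    for sanitizer in libasan libclang_rt.asan; do
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$asan_lib" ]] && break
    done
    LD_PRELOAD="$asan_lib $plugin"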
00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:08.517 "params": { 00:22:08.517 "name": "Nvme0", 00:22:08.517 "trtype": "tcp", 00:22:08.517 "traddr": "10.0.0.3", 00:22:08.517 "adrfam": "ipv4", 00:22:08.517 "trsvcid": "4420", 00:22:08.517 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:08.517 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:08.517 "hdgst": false, 00:22:08.517 "ddgst": false 00:22:08.517 }, 00:22:08.517 "method": "bdev_nvme_attach_controller" 00:22:08.517 },{ 00:22:08.517 "params": { 00:22:08.517 "name": "Nvme1", 00:22:08.517 "trtype": "tcp", 00:22:08.517 "traddr": "10.0.0.3", 00:22:08.517 "adrfam": "ipv4", 00:22:08.517 "trsvcid": "4420", 00:22:08.517 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.517 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:08.517 "hdgst": false, 00:22:08.517 "ddgst": false 00:22:08.517 }, 00:22:08.517 "method": "bdev_nvme_attach_controller" 00:22:08.517 }' 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:08.517 10:45:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:08.517 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:08.517 ... 00:22:08.517 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:08.517 ... 
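Note: the object printed above is the complete --spdk_json_conf payload for this job: two bdev_nvme_attach_controller calls that connect to cnode0 and cnode1 at 10.0.0.3:4420 over TCP and expose them to the spdk_bdev ioengine as bdevs Nvme0n1 and Nvme1n1. A minimal sketch of reproducing the run by hand, assuming the printed JSON has been saved to config.json; the job file below is a hypothetical stand-in for what gen_fio_conf emits (rw, bs and iodepth mirror the fio banner above, and thread=1 is required by the SPDK plugin's threading model):

# job.fio is illustrative; gen_fio_conf builds the real one on the fly
printf '%s\n' \
    '[global]' 'ioengine=spdk_bdev' 'thread=1' \
    'rw=randread' 'bs=8k' 'iodepth=8' \
    '[filename0]' 'filename=Nvme0n1' \
    '[filename1]' 'filename=Nvme1n1' > job.fio
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf config.json job.fio

The harness passes both files over /dev/fd with process substitution instead, which avoids temporary files but is otherwise equivalent.
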
00:22:08.517 fio-3.35 00:22:08.517 Starting 4 threads 00:22:12.703 00:22:12.703 filename0: (groupid=0, jobs=1): err= 0: pid=83848: Fri Nov 15 10:45:38 2024 00:22:12.703 read: IOPS=2029, BW=15.9MiB/s (16.6MB/s)(79.3MiB/5001msec) 00:22:12.703 slat (nsec): min=3935, max=45923, avg=13504.89, stdev=3963.79 00:22:12.703 clat (usec): min=742, max=6225, avg=3887.73, stdev=515.78 00:22:12.703 lat (usec): min=751, max=6236, avg=3901.24, stdev=516.27 00:22:12.703 clat percentiles (usec): 00:22:12.703 | 1.00th=[ 1401], 5.00th=[ 2966], 10.00th=[ 3818], 20.00th=[ 3851], 00:22:12.703 | 30.00th=[ 3851], 40.00th=[ 3884], 50.00th=[ 3884], 60.00th=[ 3916], 00:22:12.703 | 70.00th=[ 3949], 80.00th=[ 4178], 90.00th=[ 4293], 95.00th=[ 4555], 00:22:12.703 | 99.00th=[ 4686], 99.50th=[ 5014], 99.90th=[ 5800], 99.95th=[ 6063], 00:22:12.703 | 99.99th=[ 6128] 00:22:12.703 bw ( KiB/s): min=14848, max=17680, per=25.07%, avg=16368.00, stdev=986.21, samples=9 00:22:12.703 iops : min= 1856, max= 2210, avg=2046.00, stdev=123.28, samples=9 00:22:12.703 lat (usec) : 750=0.01%, 1000=0.35% 00:22:12.703 lat (msec) : 2=1.34%, 4=71.45%, 10=26.84% 00:22:12.703 cpu : usr=92.18%, sys=7.00%, ctx=17, majf=0, minf=0 00:22:12.703 IO depths : 1=0.1%, 2=22.0%, 4=51.8%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:12.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.703 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.703 issued rwts: total=10151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.703 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:12.703 filename0: (groupid=0, jobs=1): err= 0: pid=83849: Fri Nov 15 10:45:38 2024 00:22:12.703 read: IOPS=1993, BW=15.6MiB/s (16.3MB/s)(77.9MiB/5003msec) 00:22:12.703 slat (nsec): min=4044, max=40370, avg=14400.35, stdev=2950.74 00:22:12.703 clat (usec): min=1088, max=5445, avg=3957.71, stdev=382.88 00:22:12.703 lat (usec): min=1103, max=5457, avg=3972.11, stdev=382.91 00:22:12.703 clat percentiles (usec): 00:22:12.703 | 1.00th=[ 2212], 5.00th=[ 3785], 10.00th=[ 3818], 20.00th=[ 3851], 00:22:12.703 | 30.00th=[ 3851], 40.00th=[ 3884], 50.00th=[ 3884], 60.00th=[ 3916], 00:22:12.703 | 70.00th=[ 3949], 80.00th=[ 4228], 90.00th=[ 4293], 95.00th=[ 4555], 00:22:12.703 | 99.00th=[ 4752], 99.50th=[ 5211], 99.90th=[ 5342], 99.95th=[ 5407], 00:22:12.703 | 99.99th=[ 5473] 00:22:12.703 bw ( KiB/s): min=14976, max=17600, per=24.61%, avg=16069.33, stdev=769.33, samples=9 00:22:12.703 iops : min= 1872, max= 2200, avg=2008.67, stdev=96.17, samples=9 00:22:12.703 lat (msec) : 2=0.09%, 4=70.50%, 10=29.41% 00:22:12.703 cpu : usr=91.74%, sys=7.48%, ctx=7, majf=0, minf=0 00:22:12.703 IO depths : 1=0.1%, 2=23.4%, 4=50.9%, 8=25.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:12.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.703 complete : 0=0.0%, 4=90.7%, 8=9.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.703 issued rwts: total=9975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.703 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:12.703 filename1: (groupid=0, jobs=1): err= 0: pid=83850: Fri Nov 15 10:45:38 2024 00:22:12.703 read: IOPS=2179, BW=17.0MiB/s (17.9MB/s)(85.2MiB/5002msec) 00:22:12.703 slat (usec): min=7, max=149, avg=12.23, stdev= 3.91 00:22:12.703 clat (usec): min=640, max=7088, avg=3629.73, stdev=752.74 00:22:12.703 lat (usec): min=648, max=7102, avg=3641.96, stdev=753.05 00:22:12.703 clat percentiles (usec): 00:22:12.703 | 1.00th=[ 1385], 5.00th=[ 1418], 10.00th=[ 2311], 20.00th=[ 3654], 
00:22:12.703 | 30.00th=[ 3851], 40.00th=[ 3851], 50.00th=[ 3884], 60.00th=[ 3884], 00:22:12.703 | 70.00th=[ 3916], 80.00th=[ 3949], 90.00th=[ 4146], 95.00th=[ 4293], 00:22:12.703 | 99.00th=[ 4555], 99.50th=[ 4555], 99.90th=[ 4817], 99.95th=[ 5276], 00:22:12.703 | 99.99th=[ 6980] 00:22:12.703 bw ( KiB/s): min=15936, max=20480, per=26.21%, avg=17110.67, stdev=1410.09, samples=9 00:22:12.703 iops : min= 1992, max= 2560, avg=2138.78, stdev=176.24, samples=9 00:22:12.703 lat (usec) : 750=0.22%, 1000=0.09% 00:22:12.703 lat (msec) : 2=6.55%, 4=76.83%, 10=16.31% 00:22:12.703 cpu : usr=91.26%, sys=7.86%, ctx=8, majf=0, minf=0 00:22:12.703 IO depths : 1=0.1%, 2=16.1%, 4=55.0%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:12.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.703 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.703 issued rwts: total=10902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.703 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:12.703 filename1: (groupid=0, jobs=1): err= 0: pid=83851: Fri Nov 15 10:45:38 2024 00:22:12.703 read: IOPS=1959, BW=15.3MiB/s (16.1MB/s)(76.6MiB/5002msec) 00:22:12.703 slat (usec): min=4, max=265, avg=14.91, stdev= 4.89 00:22:12.703 clat (usec): min=1992, max=6892, avg=4022.34, stdev=294.52 00:22:12.703 lat (usec): min=2006, max=6905, avg=4037.25, stdev=294.83 00:22:12.703 clat percentiles (usec): 00:22:12.703 | 1.00th=[ 3752], 5.00th=[ 3818], 10.00th=[ 3818], 20.00th=[ 3851], 00:22:12.703 | 30.00th=[ 3851], 40.00th=[ 3884], 50.00th=[ 3884], 60.00th=[ 3916], 00:22:12.703 | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 4490], 95.00th=[ 4621], 00:22:12.703 | 99.00th=[ 4817], 99.50th=[ 5276], 99.90th=[ 5735], 99.95th=[ 6128], 00:22:12.703 | 99.99th=[ 6915] 00:22:12.704 bw ( KiB/s): min=14848, max=16368, per=24.12%, avg=15749.33, stdev=607.00, samples=9 00:22:12.704 iops : min= 1856, max= 2046, avg=1968.67, stdev=75.87, samples=9 00:22:12.704 lat (msec) : 2=0.02%, 4=68.03%, 10=31.95% 00:22:12.704 cpu : usr=91.66%, sys=7.12%, ctx=82, majf=0, minf=0 00:22:12.704 IO depths : 1=0.1%, 2=24.9%, 4=50.1%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:12.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.704 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.704 issued rwts: total=9803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.704 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:12.704 00:22:12.704 Run status group 0 (all jobs): 00:22:12.704 READ: bw=63.8MiB/s (66.9MB/s), 15.3MiB/s-17.0MiB/s (16.1MB/s-17.9MB/s), io=319MiB (334MB), run=5001-5003msec 00:22:12.962 10:45:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:22:12.962 10:45:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:12.962 10:45:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:12.962 10:45:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:12.962 10:45:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:12.962 10:45:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:12.962 10:45:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.962 10:45:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:12.962 10:45:38 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.962 10:45:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:12.962 10:45:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.962 10:45:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:12.962 10:45:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.962 10:45:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:12.962 10:45:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:12.962 10:45:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:22:12.962 10:45:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:12.962 10:45:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.962 10:45:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:12.963 10:45:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.963 10:45:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:12.963 10:45:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.963 10:45:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:12.963 10:45:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.963 00:22:12.963 real 0m23.746s 00:22:12.963 user 2m4.071s 00:22:12.963 sys 0m8.901s 00:22:12.963 10:45:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:12.963 10:45:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:12.963 ************************************ 00:22:12.963 END TEST fio_dif_rand_params 00:22:12.963 ************************************ 00:22:13.221 10:45:38 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:22:13.221 10:45:38 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:13.221 10:45:38 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:13.221 10:45:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:13.221 ************************************ 00:22:13.221 START TEST fio_dif_digest 00:22:13.221 ************************************ 00:22:13.221 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:22:13.221 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:22:13.221 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:22:13.221 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:22:13.221 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:22:13.221 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:22:13.221 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:22:13.221 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:22:13.221 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:22:13.221 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:22:13.221 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:22:13.221 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:22:13.221 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:22:13.221 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:22:13.221 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:22:13.221 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:22:13.221 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:13.221 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.221 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:13.221 bdev_null0 00:22:13.221 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.221 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:13.221 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:13.222 [2024-11-15 10:45:38.535414] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:13.222 { 00:22:13.222 "params": { 00:22:13.222 "name": "Nvme$subsystem", 00:22:13.222 "trtype": "$TEST_TRANSPORT", 00:22:13.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.222 "adrfam": "ipv4", 00:22:13.222 "trsvcid": "$NVMF_PORT", 00:22:13.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.222 "hdgst": ${hdgst:-false}, 00:22:13.222 "ddgst": ${ddgst:-false} 00:22:13.222 }, 00:22:13.222 "method": 
"bdev_nvme_attach_controller" 00:22:13.222 } 00:22:13.222 EOF 00:22:13.222 )") 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:13.222 "params": { 00:22:13.222 "name": "Nvme0", 00:22:13.222 "trtype": "tcp", 00:22:13.222 "traddr": "10.0.0.3", 00:22:13.222 "adrfam": "ipv4", 00:22:13.222 "trsvcid": "4420", 00:22:13.222 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:13.222 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:13.222 "hdgst": true, 00:22:13.222 "ddgst": true 00:22:13.222 }, 00:22:13.222 "method": "bdev_nvme_attach_controller" 00:22:13.222 }' 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:13.222 10:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:13.481 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:13.481 ... 
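Note: unlike the rand_params config earlier, the attach call printed above sets "hdgst" and "ddgst" to true, so every NVMe/TCP PDU in this test carries CRC32C header and data digests; that is what fio_dif_digest exercises. Here the JSON is consumed by fio's embedded SPDK instance, but the same method can be driven over JSON-RPC against any running SPDK application. A sketch, assuming the default /var/tmp/spdk.sock RPC socket (the id is arbitrary and the nc framing is illustrative):

cat <<'EOF' | nc -U /var/tmp/spdk.sock
{
  "jsonrpc": "2.0", "id": 1,
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
    "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": true, "ddgst": true
  }
}
EOF
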
00:22:13.481 fio-3.35 00:22:13.481 Starting 3 threads 00:22:25.783 00:22:25.783 filename0: (groupid=0, jobs=1): err= 0: pid=83957: Fri Nov 15 10:45:49 2024 00:22:25.783 read: IOPS=222, BW=27.8MiB/s (29.1MB/s)(278MiB/10002msec) 00:22:25.783 slat (nsec): min=7519, max=38940, avg=10379.07, stdev=3127.27 00:22:25.783 clat (usec): min=11612, max=15048, avg=13472.65, stdev=152.61 00:22:25.783 lat (usec): min=11620, max=15074, avg=13483.03, stdev=152.89 00:22:25.783 clat percentiles (usec): 00:22:25.783 | 1.00th=[13304], 5.00th=[13304], 10.00th=[13435], 20.00th=[13435], 00:22:25.783 | 30.00th=[13435], 40.00th=[13435], 50.00th=[13435], 60.00th=[13435], 00:22:25.783 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13698], 95.00th=[13698], 00:22:25.783 | 99.00th=[13960], 99.50th=[13960], 99.90th=[15008], 99.95th=[15008], 00:22:25.783 | 99.99th=[15008] 00:22:25.783 bw ( KiB/s): min=27648, max=29184, per=33.34%, avg=28456.42, stdev=310.77, samples=19 00:22:25.783 iops : min= 216, max= 228, avg=222.32, stdev= 2.43, samples=19 00:22:25.783 lat (msec) : 20=100.00% 00:22:25.783 cpu : usr=91.18%, sys=8.33%, ctx=13, majf=0, minf=0 00:22:25.783 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:25.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:25.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:25.783 issued rwts: total=2223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:25.783 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:25.783 filename0: (groupid=0, jobs=1): err= 0: pid=83958: Fri Nov 15 10:45:49 2024 00:22:25.783 read: IOPS=222, BW=27.8MiB/s (29.1MB/s)(278MiB/10004msec) 00:22:25.783 slat (nsec): min=7528, max=43944, avg=10946.80, stdev=3879.45 00:22:25.783 clat (usec): min=8121, max=19682, avg=13472.12, stdev=324.66 00:22:25.783 lat (usec): min=8130, max=19708, avg=13483.06, stdev=324.88 00:22:25.783 clat percentiles (usec): 00:22:25.783 | 1.00th=[13304], 5.00th=[13304], 10.00th=[13304], 20.00th=[13435], 00:22:25.783 | 30.00th=[13435], 40.00th=[13435], 50.00th=[13435], 60.00th=[13435], 00:22:25.783 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13566], 95.00th=[13698], 00:22:25.783 | 99.00th=[13960], 99.50th=[14091], 99.90th=[19530], 99.95th=[19792], 00:22:25.783 | 99.99th=[19792] 00:22:25.783 bw ( KiB/s): min=27648, max=29184, per=33.29%, avg=28416.00, stdev=362.04, samples=19 00:22:25.783 iops : min= 216, max= 228, avg=222.00, stdev= 2.83, samples=19 00:22:25.783 lat (msec) : 10=0.13%, 20=99.87% 00:22:25.783 cpu : usr=91.68%, sys=7.78%, ctx=6, majf=0, minf=0 00:22:25.783 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:25.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:25.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:25.783 issued rwts: total=2223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:25.783 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:25.783 filename0: (groupid=0, jobs=1): err= 0: pid=83959: Fri Nov 15 10:45:49 2024 00:22:25.783 read: IOPS=222, BW=27.8MiB/s (29.2MB/s)(278MiB/10006msec) 00:22:25.783 slat (nsec): min=5355, max=42031, avg=10818.79, stdev=3616.53 00:22:25.783 clat (usec): min=5091, max=14207, avg=13458.41, stdev=335.62 00:22:25.783 lat (usec): min=5096, max=14220, avg=13469.22, stdev=335.60 00:22:25.783 clat percentiles (usec): 00:22:25.783 | 1.00th=[13304], 5.00th=[13304], 10.00th=[13304], 20.00th=[13435], 00:22:25.783 | 30.00th=[13435], 40.00th=[13435], 
50.00th=[13435], 60.00th=[13435], 00:22:25.783 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13566], 95.00th=[13698], 00:22:25.783 | 99.00th=[13960], 99.50th=[14091], 99.90th=[14222], 99.95th=[14222], 00:22:25.783 | 99.99th=[14222] 00:22:25.783 bw ( KiB/s): min=27648, max=29184, per=33.38%, avg=28490.84, stdev=354.80, samples=19 00:22:25.783 iops : min= 216, max= 228, avg=222.58, stdev= 2.78, samples=19 00:22:25.783 lat (msec) : 10=0.13%, 20=99.87% 00:22:25.783 cpu : usr=91.26%, sys=8.21%, ctx=22, majf=0, minf=0 00:22:25.783 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:25.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:25.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:25.784 issued rwts: total=2226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:25.784 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:25.784 00:22:25.784 Run status group 0 (all jobs): 00:22:25.784 READ: bw=83.3MiB/s (87.4MB/s), 27.8MiB/s-27.8MiB/s (29.1MB/s-29.2MB/s), io=834MiB (875MB), run=10002-10006msec 00:22:25.784 10:45:49 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:22:25.784 10:45:49 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:22:25.784 10:45:49 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:22:25.784 10:45:49 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:25.784 10:45:49 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:22:25.784 10:45:49 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:25.784 10:45:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.784 10:45:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:25.784 10:45:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.784 10:45:49 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:25.784 10:45:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.784 10:45:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:25.784 10:45:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.784 00:22:25.784 real 0m11.057s 00:22:25.784 user 0m28.090s 00:22:25.784 sys 0m2.725s 00:22:25.784 10:45:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:25.784 10:45:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:25.784 ************************************ 00:22:25.784 END TEST fio_dif_digest 00:22:25.784 ************************************ 00:22:25.784 10:45:49 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:25.784 10:45:49 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:22:25.784 10:45:49 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:25.784 10:45:49 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:22:25.784 10:45:49 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:25.784 10:45:49 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:22:25.784 10:45:49 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:25.784 10:45:49 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:25.784 rmmod nvme_tcp 00:22:25.784 rmmod nvme_fabrics 00:22:25.784 rmmod nvme_keyring 00:22:25.784 10:45:49 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:25.784 10:45:49 
nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:22:25.784 10:45:49 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:22:25.784 10:45:49 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 83201 ']' 00:22:25.784 10:45:49 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 83201 00:22:25.784 10:45:49 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 83201 ']' 00:22:25.784 10:45:49 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 83201 00:22:25.784 10:45:49 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:22:25.784 10:45:49 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:25.784 10:45:49 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83201 00:22:25.784 10:45:49 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:25.784 10:45:49 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:25.784 killing process with pid 83201 00:22:25.784 10:45:49 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83201' 00:22:25.784 10:45:49 nvmf_dif -- common/autotest_common.sh@971 -- # kill 83201 00:22:25.784 10:45:49 nvmf_dif -- common/autotest_common.sh@976 -- # wait 83201 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:25.784 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:25.784 Waiting for block devices as requested 00:22:25.784 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:25.784 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:25.784 10:45:50 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:25.784 10:45:51 
nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:25.784 10:45:51 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.784 10:45:51 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:25.784 10:45:51 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.784 10:45:51 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:22:25.784 00:22:25.784 real 1m0.829s 00:22:25.784 user 3m49.046s 00:22:25.784 sys 0m20.161s 00:22:25.784 10:45:51 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:25.784 10:45:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:25.784 ************************************ 00:22:25.784 END TEST nvmf_dif 00:22:25.784 ************************************ 00:22:25.784 10:45:51 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:25.784 10:45:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:25.784 10:45:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:25.784 10:45:51 -- common/autotest_common.sh@10 -- # set +x 00:22:25.784 ************************************ 00:22:25.784 START TEST nvmf_abort_qd_sizes 00:22:25.784 ************************************ 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:25.784 * Looking for test storage... 00:22:25.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:25.784 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:22:26.043 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:22:26.043 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:26.043 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:26.043 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:22:26.043 10:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:26.043 10:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:26.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.043 --rc genhtml_branch_coverage=1 00:22:26.043 --rc genhtml_function_coverage=1 00:22:26.043 --rc genhtml_legend=1 00:22:26.043 --rc geninfo_all_blocks=1 00:22:26.043 --rc geninfo_unexecuted_blocks=1 00:22:26.043 00:22:26.044 ' 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:26.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.044 --rc genhtml_branch_coverage=1 00:22:26.044 --rc genhtml_function_coverage=1 00:22:26.044 --rc genhtml_legend=1 00:22:26.044 --rc geninfo_all_blocks=1 00:22:26.044 --rc geninfo_unexecuted_blocks=1 00:22:26.044 00:22:26.044 ' 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:26.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.044 --rc genhtml_branch_coverage=1 00:22:26.044 --rc genhtml_function_coverage=1 00:22:26.044 --rc genhtml_legend=1 00:22:26.044 --rc geninfo_all_blocks=1 00:22:26.044 --rc geninfo_unexecuted_blocks=1 00:22:26.044 00:22:26.044 ' 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:26.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.044 --rc genhtml_branch_coverage=1 00:22:26.044 --rc genhtml_function_coverage=1 00:22:26.044 --rc genhtml_legend=1 00:22:26.044 --rc geninfo_all_blocks=1 00:22:26.044 --rc geninfo_unexecuted_blocks=1 00:22:26.044 00:22:26.044 ' 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:26.044 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:26.044 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:26.045 Cannot find device "nvmf_init_br" 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:26.045 Cannot find device "nvmf_init_br2" 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:26.045 Cannot find device "nvmf_tgt_br" 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:26.045 Cannot find device "nvmf_tgt_br2" 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:26.045 Cannot find device "nvmf_init_br" 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:26.045 Cannot find device "nvmf_init_br2" 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:26.045 Cannot find device "nvmf_tgt_br" 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:26.045 Cannot find device "nvmf_tgt_br2" 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:26.045 Cannot find device "nvmf_br" 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:26.045 Cannot find device "nvmf_init_if" 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:26.045 Cannot find device "nvmf_init_if2" 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:26.045 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:26.045 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:26.045 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:26.304 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:26.304 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:22:26.304 00:22:26.304 --- 10.0.0.3 ping statistics --- 00:22:26.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.304 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:26.304 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:26.304 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:22:26.304 00:22:26.304 --- 10.0.0.4 ping statistics --- 00:22:26.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.304 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:26.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:26.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:22:26.304 00:22:26.304 --- 10.0.0.1 ping statistics --- 00:22:26.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.304 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:26.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:26.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:22:26.304 00:22:26.304 --- 10.0.0.2 ping statistics --- 00:22:26.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.304 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:22:26.304 10:45:51 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:26.930 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:27.187 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:27.187 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:27.187 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:27.187 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:27.187 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:27.187 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:27.187 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:27.187 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:27.187 10:45:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:22:27.187 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:27.187 10:45:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:27.187 10:45:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:27.187 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84611 00:22:27.187 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:22:27.187 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84611 00:22:27.187 10:45:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 84611 ']' 00:22:27.187 10:45:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.187 10:45:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:27.187 10:45:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.187 10:45:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:27.187 10:45:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:27.187 [2024-11-15 10:45:52.676617] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:22:27.187 [2024-11-15 10:45:52.676897] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.445 [2024-11-15 10:45:52.829466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:27.445 [2024-11-15 10:45:52.896967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.445 [2024-11-15 10:45:52.897040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.445 [2024-11-15 10:45:52.897062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.445 [2024-11-15 10:45:52.897079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.445 [2024-11-15 10:45:52.897093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:27.445 [2024-11-15 10:45:52.898555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.445 [2024-11-15 10:45:52.898618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.445 [2024-11-15 10:45:52.898758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:27.445 [2024-11-15 10:45:52.898765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.703 [2024-11-15 10:45:52.955805] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:22:27.703 10:45:53 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:22:27.703 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
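The xtrace above is scripts/common.sh's nvme_in_userspace building the NVMe device list one step at a time. Condensed, the same enumeration is a three-stage pipeline; a minimal standalone sketch (not captured from this run, and assuming lspci's -mm/-n quoted field layout):

    # List NVMe controllers by PCI class: class 01 (mass storage),
    # subclass 08 (non-volatile memory), prog-if 02 (NVM Express).
    cc='"0108"'                      # class field as lspci -mm prints it, quotes included
    lspci -mm -n -D | grep -i -- -p02 | awk -v cc="$cc" '$2 == cc { print $1 }'

On this VM it would print the same two QEMU NVMe functions the test goes on to use, 0000:00:10.0 and 0000:00:11.0.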
00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:27.704 10:45:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:27.704 ************************************ 00:22:27.704 START TEST spdk_target_abort 00:22:27.704 ************************************ 00:22:27.704 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:22:27.704 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:22:27.704 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:22:27.704 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.704 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:27.704 spdk_targetn1 00:22:27.704 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.704 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:27.704 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.704 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:27.704 [2024-11-15 10:45:53.187629] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:27.963 [2024-11-15 10:45:53.226946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:27.963 10:45:53 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:27.963 10:45:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:31.247 Initializing NVMe Controllers 00:22:31.247 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:22:31.247 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:31.247 Initialization complete. Launching workers. 
00:22:31.247 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10872, failed: 0 00:22:31.247 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1036, failed to submit 9836 00:22:31.247 success 748, unsuccessful 288, failed 0 00:22:31.247 10:45:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:31.247 10:45:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:34.530 Initializing NVMe Controllers 00:22:34.530 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:22:34.530 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:34.530 Initialization complete. Launching workers. 00:22:34.530 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8880, failed: 0 00:22:34.530 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1143, failed to submit 7737 00:22:34.530 success 387, unsuccessful 756, failed 0 00:22:34.530 10:45:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:34.530 10:45:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:37.813 Initializing NVMe Controllers 00:22:37.813 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:22:37.813 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:37.813 Initialization complete. Launching workers. 
00:22:37.813 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31342, failed: 0 00:22:37.813 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2317, failed to submit 29025 00:22:37.813 success 455, unsuccessful 1862, failed 0 00:22:37.813 10:46:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:22:37.813 10:46:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.813 10:46:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:37.813 10:46:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.813 10:46:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:22:37.813 10:46:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.813 10:46:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:39.188 10:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.188 10:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84611 00:22:39.188 10:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 84611 ']' 00:22:39.188 10:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 84611 00:22:39.188 10:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:22:39.188 10:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:39.188 10:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84611 00:22:39.188 killing process with pid 84611 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84611' 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 84611 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 84611 00:22:39.189 ************************************ 00:22:39.189 END TEST spdk_target_abort 00:22:39.189 ************************************ 00:22:39.189 00:22:39.189 real 0m11.491s 00:22:39.189 user 0m42.952s 00:22:39.189 sys 0m2.306s 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:39.189 10:46:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:22:39.189 10:46:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:39.189 10:46:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:39.189 10:46:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:39.189 ************************************ 00:22:39.189 START TEST kernel_target_abort 00:22:39.189 
************************************ 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:39.189 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:22:39.447 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:39.447 10:46:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:39.705 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:39.705 Waiting for block devices as requested 00:22:39.706 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:39.964 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:22:39.964 No valid GPT data, bailing 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:22:39.964 No valid GPT data, bailing 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:22:39.964 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:22:40.223 No valid GPT data, bailing 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:22:40.223 No valid GPT data, bailing 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f --hostid=50e4d619-cecf-4dd2-989d-1336dee31d8f -a 10.0.0.1 -t tcp -s 4420 00:22:40.223 00:22:40.223 Discovery Log Number of Records 2, Generation counter 2 00:22:40.223 =====Discovery Log Entry 0====== 00:22:40.223 trtype: tcp 00:22:40.223 adrfam: ipv4 00:22:40.223 subtype: current discovery subsystem 00:22:40.223 treq: not specified, sq flow control disable supported 00:22:40.223 portid: 1 00:22:40.223 trsvcid: 4420 00:22:40.223 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:40.223 traddr: 10.0.0.1 00:22:40.223 eflags: none 00:22:40.223 sectype: none 00:22:40.223 =====Discovery Log Entry 1====== 00:22:40.223 trtype: tcp 00:22:40.223 adrfam: ipv4 00:22:40.223 subtype: nvme subsystem 00:22:40.223 treq: not specified, sq flow control disable supported 00:22:40.223 portid: 1 00:22:40.223 trsvcid: 4420 00:22:40.223 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:40.223 traddr: 10.0.0.1 00:22:40.223 eflags: none 00:22:40.223 sectype: none 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:40.223 10:46:05 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:40.223 10:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:43.503 Initializing NVMe Controllers 00:22:43.503 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:43.503 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:43.503 Initialization complete. Launching workers. 00:22:43.503 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35611, failed: 0 00:22:43.503 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35611, failed to submit 0 00:22:43.503 success 0, unsuccessful 35611, failed 0 00:22:43.503 10:46:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:43.504 10:46:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:46.788 Initializing NVMe Controllers 00:22:46.788 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:46.788 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:46.788 Initialization complete. Launching workers. 
00:22:46.788 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65563, failed: 0 00:22:46.788 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28594, failed to submit 36969 00:22:46.788 success 0, unsuccessful 28594, failed 0 00:22:46.788 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:46.788 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:50.076 Initializing NVMe Controllers 00:22:50.076 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:50.076 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:50.076 Initialization complete. Launching workers. 00:22:50.076 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 77560, failed: 0 00:22:50.076 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19440, failed to submit 58120 00:22:50.076 success 0, unsuccessful 19440, failed 0 00:22:50.076 10:46:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:22:50.076 10:46:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:50.076 10:46:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:22:50.076 10:46:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:50.076 10:46:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:50.076 10:46:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:50.076 10:46:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:50.076 10:46:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:22:50.076 10:46:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:22:50.076 10:46:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:50.644 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:53.924 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:53.924 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:53.924 00:22:53.924 real 0m14.379s 00:22:53.924 user 0m6.370s 00:22:53.924 sys 0m5.461s 00:22:53.924 10:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:53.924 10:46:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:53.924 ************************************ 00:22:53.924 END TEST kernel_target_abort 00:22:53.924 ************************************ 00:22:53.924 10:46:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:53.924 10:46:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:22:53.924 
10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:53.924 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:22:53.924 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:53.924 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:22:53.924 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:53.924 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:53.924 rmmod nvme_tcp 00:22:53.924 rmmod nvme_fabrics 00:22:53.924 rmmod nvme_keyring 00:22:53.924 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:53.924 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:22:53.924 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:22:53.924 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84611 ']' 00:22:53.924 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84611 00:22:53.924 10:46:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 84611 ']' 00:22:53.924 10:46:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 84611 00:22:53.924 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (84611) - No such process 00:22:53.924 Process with pid 84611 is not found 00:22:53.924 10:46:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 84611 is not found' 00:22:53.924 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:22:53.924 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:54.182 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:54.182 Waiting for block devices as requested 00:22:54.182 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:54.440 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:54.440 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:54.440 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:54.440 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:22:54.440 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:22:54.440 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:54.440 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:22:54.440 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:54.440 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:54.440 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:54.440 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:54.440 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:54.440 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:54.440 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:54.440 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:54.440 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:54.440 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:54.440 10:46:19 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:54.440 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:54.698 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:54.699 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:54.699 10:46:19 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:54.699 10:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:54.699 10:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.699 10:46:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:54.699 10:46:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.699 10:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:22:54.699 ************************************ 00:22:54.699 END TEST nvmf_abort_qd_sizes 00:22:54.699 ************************************ 00:22:54.699 00:22:54.699 real 0m28.956s 00:22:54.699 user 0m50.471s 00:22:54.699 sys 0m9.218s 00:22:54.699 10:46:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:54.699 10:46:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:54.699 10:46:20 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:54.699 10:46:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:54.699 10:46:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:54.699 10:46:20 -- common/autotest_common.sh@10 -- # set +x 00:22:54.699 ************************************ 00:22:54.699 START TEST keyring_file 00:22:54.699 ************************************ 00:22:54.699 10:46:20 keyring_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:54.699 * Looking for test storage... 
00:22:54.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:54.699 10:46:20 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:54.699 10:46:20 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:22:54.699 10:46:20 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:54.958 10:46:20 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:54.958 10:46:20 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:54.958 10:46:20 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:54.958 10:46:20 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@345 -- # : 1 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@353 -- # local d=1 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@355 -- # echo 1 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@353 -- # local d=2 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@355 -- # echo 2 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@368 -- # return 0 00:22:54.959 10:46:20 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:54.959 10:46:20 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:54.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.959 --rc genhtml_branch_coverage=1 00:22:54.959 --rc genhtml_function_coverage=1 00:22:54.959 --rc genhtml_legend=1 00:22:54.959 --rc geninfo_all_blocks=1 00:22:54.959 --rc geninfo_unexecuted_blocks=1 00:22:54.959 00:22:54.959 ' 00:22:54.959 10:46:20 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:54.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.959 --rc genhtml_branch_coverage=1 00:22:54.959 --rc genhtml_function_coverage=1 00:22:54.959 --rc genhtml_legend=1 00:22:54.959 --rc geninfo_all_blocks=1 00:22:54.959 --rc 
geninfo_unexecuted_blocks=1 00:22:54.959 00:22:54.959 ' 00:22:54.959 10:46:20 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:54.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.959 --rc genhtml_branch_coverage=1 00:22:54.959 --rc genhtml_function_coverage=1 00:22:54.959 --rc genhtml_legend=1 00:22:54.959 --rc geninfo_all_blocks=1 00:22:54.959 --rc geninfo_unexecuted_blocks=1 00:22:54.959 00:22:54.959 ' 00:22:54.959 10:46:20 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:54.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.959 --rc genhtml_branch_coverage=1 00:22:54.959 --rc genhtml_function_coverage=1 00:22:54.959 --rc genhtml_legend=1 00:22:54.959 --rc geninfo_all_blocks=1 00:22:54.959 --rc geninfo_unexecuted_blocks=1 00:22:54.959 00:22:54.959 ' 00:22:54.959 10:46:20 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:54.959 10:46:20 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:54.959 10:46:20 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:22:54.959 10:46:20 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.959 10:46:20 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.959 10:46:20 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.959 10:46:20 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.959 10:46:20 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.959 10:46:20 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.959 10:46:20 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.959 10:46:20 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.959 10:46:20 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.959 10:46:20 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.959 10:46:20 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:22:54.959 10:46:20 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:22:54.959 10:46:20 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.959 10:46:20 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.959 10:46:20 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:54.959 10:46:20 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.959 10:46:20 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.959 10:46:20 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.959 10:46:20 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.959 10:46:20 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.959 10:46:20 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.959 10:46:20 keyring_file -- paths/export.sh@5 -- # export PATH 00:22:54.960 10:46:20 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.960 10:46:20 keyring_file -- nvmf/common.sh@51 -- # : 0 00:22:54.960 10:46:20 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:54.960 10:46:20 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:54.960 10:46:20 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.960 10:46:20 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.960 10:46:20 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.960 10:46:20 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:54.960 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:54.960 10:46:20 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:54.960 10:46:20 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:54.960 10:46:20 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:54.960 10:46:20 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:54.960 10:46:20 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:54.960 10:46:20 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:54.960 10:46:20 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:22:54.960 10:46:20 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:22:54.960 10:46:20 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:22:54.960 10:46:20 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:54.960 10:46:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:54.960 10:46:20 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:54.960 10:46:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:54.960 10:46:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:54.960 10:46:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:54.960 10:46:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.j8z9DGWq5r 00:22:54.960 10:46:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:54.960 10:46:20 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:54.960 10:46:20 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:22:54.960 10:46:20 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:54.960 10:46:20 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:54.960 10:46:20 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:22:54.960 10:46:20 keyring_file -- nvmf/common.sh@733 -- # python - 00:22:54.960 10:46:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.j8z9DGWq5r 00:22:54.960 10:46:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.j8z9DGWq5r 00:22:54.960 10:46:20 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.j8z9DGWq5r 00:22:54.960 10:46:20 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:22:54.960 10:46:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:54.960 10:46:20 keyring_file -- keyring/common.sh@17 -- # name=key1 00:22:54.960 10:46:20 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:54.960 10:46:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:54.960 10:46:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:54.960 10:46:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.whpyu34C20 00:22:54.960 10:46:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:22:54.960 10:46:20 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:54.960 10:46:20 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:22:54.960 10:46:20 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:54.960 10:46:20 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:22:54.960 10:46:20 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:22:54.960 10:46:20 keyring_file -- nvmf/common.sh@733 -- # python - 00:22:54.960 10:46:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.whpyu34C20 00:22:54.960 10:46:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.whpyu34C20 00:22:54.960 10:46:20 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.whpyu34C20 00:22:54.960 10:46:20 keyring_file -- keyring/file.sh@30 -- # tgtpid=85518 00:22:54.960 10:46:20 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85518 00:22:54.960 10:46:20 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:54.960 10:46:20 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 85518 ']' 00:22:54.960 10:46:20 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.960 10:46:20 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:54.960 10:46:20 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:54.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.960 10:46:20 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:54.960 10:46:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:55.218 [2024-11-15 10:46:20.509891] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:22:55.218 [2024-11-15 10:46:20.509996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85518 ] 00:22:55.218 [2024-11-15 10:46:20.661505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.477 [2024-11-15 10:46:20.733102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.477 [2024-11-15 10:46:20.810498] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:56.043 10:46:21 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:56.043 10:46:21 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:22:56.043 10:46:21 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:22:56.043 10:46:21 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.043 10:46:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:56.043 [2024-11-15 10:46:21.519274] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.043 null0 00:22:56.301 [2024-11-15 10:46:21.551225] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:56.302 [2024-11-15 10:46:21.551398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:56.302 10:46:21 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.302 10:46:21 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:56.302 10:46:21 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:22:56.302 10:46:21 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:56.302 10:46:21 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:56.302 10:46:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:56.302 10:46:21 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:56.302 10:46:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:56.302 10:46:21 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:56.302 10:46:21 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.302 10:46:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:56.302 [2024-11-15 10:46:21.579216] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:22:56.302 request: 00:22:56.302 { 00:22:56.302 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:22:56.302 "secure_channel": false, 00:22:56.302 "listen_address": { 00:22:56.302 "trtype": "tcp", 00:22:56.302 "traddr": "127.0.0.1", 00:22:56.302 "trsvcid": "4420" 00:22:56.302 }, 00:22:56.302 "method": "nvmf_subsystem_add_listener", 00:22:56.302 "req_id": 1 00:22:56.302 } 
00:22:56.302 Got JSON-RPC error response 00:22:56.302 response: 00:22:56.302 { 00:22:56.302 "code": -32602, 00:22:56.302 "message": "Invalid parameters" 00:22:56.302 } 00:22:56.302 10:46:21 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:56.302 10:46:21 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:22:56.302 10:46:21 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:56.302 10:46:21 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:56.302 10:46:21 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:56.302 10:46:21 keyring_file -- keyring/file.sh@47 -- # bperfpid=85535 00:22:56.302 10:46:21 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85535 /var/tmp/bperf.sock 00:22:56.302 10:46:21 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:22:56.302 10:46:21 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 85535 ']' 00:22:56.302 10:46:21 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:56.302 10:46:21 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:56.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:56.302 10:46:21 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:56.302 10:46:21 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:56.302 10:46:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:56.302 [2024-11-15 10:46:21.644117] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
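The prep_key/format_interchange_psk steps traced above take a raw hex key and a digest selector, format them as an NVMe TLS PSK interchange string with an inline python snippet, write the result to a mktemp path, and restrict it to mode 0600. A minimal sketch of what that python step plausibly computes, assuming the interchange layout NVMeTLSkey-1:<digest>:<base64(key || crc32)>: with digest 0 rendered as "00" and the CRC-32 appended little-endian (the verbatim format_key helper in nvmf/common.sh may differ in detail):

python3 - <<'EOF'
# Illustrative sketch, not the verbatim format_key helper from nvmf/common.sh.
import base64, zlib
key = bytes.fromhex("00112233445566778899aabbccddeeff")  # key0 from the trace above
crc = zlib.crc32(key).to_bytes(4, "little")              # assumed little-endian CRC-32
print(f"NVMeTLSkey-1:00:{base64.b64encode(key + crc).decode()}:")
EOF

The resulting one-line secret is what keyring_file_add_key later loads from /tmp/tmp.j8z9DGWq5r and /tmp/tmp.whpyu34C20.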
00:22:56.302 [2024-11-15 10:46:21.644215] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85535 ] 00:22:56.302 [2024-11-15 10:46:21.794955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.560 [2024-11-15 10:46:21.868591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.560 [2024-11-15 10:46:21.927323] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:57.494 10:46:22 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:57.494 10:46:22 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:22:57.495 10:46:22 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.j8z9DGWq5r 00:22:57.495 10:46:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.j8z9DGWq5r 00:22:57.495 10:46:22 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.whpyu34C20 00:22:57.495 10:46:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.whpyu34C20 00:22:58.062 10:46:23 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:22:58.062 10:46:23 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:22:58.062 10:46:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:58.062 10:46:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:58.062 10:46:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:58.062 10:46:23 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.j8z9DGWq5r == \/\t\m\p\/\t\m\p\.\j\8\z\9\D\G\W\q\5\r ]] 00:22:58.328 10:46:23 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:22:58.328 10:46:23 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:22:58.328 10:46:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:58.328 10:46:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:58.328 10:46:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:58.586 10:46:23 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.whpyu34C20 == \/\t\m\p\/\t\m\p\.\w\h\p\y\u\3\4\C\2\0 ]] 00:22:58.586 10:46:23 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:22:58.586 10:46:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:58.586 10:46:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:58.586 10:46:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:58.586 10:46:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:58.586 10:46:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:58.844 10:46:24 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:22:58.844 10:46:24 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:22:58.844 10:46:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:58.844 10:46:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:58.844 10:46:24 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:58.844 10:46:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:58.844 10:46:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:59.101 10:46:24 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:22:59.101 10:46:24 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:59.101 10:46:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:59.361 [2024-11-15 10:46:24.703035] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:59.361 nvme0n1 00:22:59.361 10:46:24 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:22:59.361 10:46:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:59.361 10:46:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:59.361 10:46:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:59.361 10:46:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:59.361 10:46:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:59.928 10:46:25 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:22:59.928 10:46:25 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:22:59.928 10:46:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:59.928 10:46:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:59.928 10:46:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:59.928 10:46:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:59.928 10:46:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:00.185 10:46:25 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:23:00.185 10:46:25 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:00.185 Running I/O for 1 seconds... 
00:23:01.119 11035.00 IOPS, 43.11 MiB/s
00:23:01.119 Latency(us)
00:23:01.119 [2024-11-15T10:46:26.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:01.119 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:23:01.119 nvme0n1 : 1.01 11095.44 43.34 0.00 0.00 11505.45 4021.53 21686.46
00:23:01.119 [2024-11-15T10:46:26.617Z] ===================================================================================================================
00:23:01.119 [2024-11-15T10:46:26.617Z] Total : 11095.44 43.34 0.00 0.00 11505.45 4021.53 21686.46
00:23:01.119 {
00:23:01.119 "results": [
00:23:01.119 {
00:23:01.119 "job": "nvme0n1",
00:23:01.119 "core_mask": "0x2",
00:23:01.119 "workload": "randrw",
00:23:01.120 "percentage": 50,
00:23:01.120 "status": "finished",
00:23:01.120 "queue_depth": 128,
00:23:01.120 "io_size": 4096,
00:23:01.120 "runtime": 1.006179,
00:23:01.120 "iops": 11095.441268402541,
00:23:01.120 "mibps": 43.341567454697426,
00:23:01.120 "io_failed": 0,
00:23:01.120 "io_timeout": 0,
00:23:01.120 "avg_latency_us": 11505.45333637341,
00:23:01.120 "min_latency_us": 4021.5272727272727,
00:23:01.120 "max_latency_us": 21686.458181818183
00:23:01.120 }
00:23:01.120 ],
00:23:01.120 "core_count": 1
00:23:01.120 }
00:23:01.120 10:46:26 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:23:01.120 10:46:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:23:01.377 10:46:26 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0
00:23:01.377 10:46:26 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:23:01.377 10:46:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:23:01.377 10:46:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:23:01.377 10:46:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:23:01.377 10:46:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:23:01.941 10:46:27 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:23:01.941 10:46:27 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1
00:23:01.941 10:46:27 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:23:01.941 10:46:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:23:01.941 10:46:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:23:01.941 10:46:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:23:01.941 10:46:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:23:02.199 10:46:27 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 ))
00:23:02.199 10:46:27 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:23:02.199 10:46:27 keyring_file -- common/autotest_common.sh@650 -- # local es=0
00:23:02.199 10:46:27 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:23:02.199 10:46:27 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:23:02.199 10:46:27 keyring_file --
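The refcnt assertions above reduce to one RPC plus jq filters; bperf_cmd is just rpc.py pointed at the bdevperf socket. Condensed, the keyring/common.sh helpers exercised in this log look roughly like the following sketch (socket path and rpc.py location taken from the commands traced above):

get_key() {
    # List all keys known to the bdevperf app and pick the named one
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
        | jq ".[] | select(.name == \"$1\")"
}
get_refcnt() {
    get_key "$1" | jq -r .refcnt
}

An attached controller pins its PSK, so (( $(get_refcnt key0) == 2 )) holds while nvme0 is connected, and the count drops back to 1 after bdev_nvme_detach_controller, which is exactly what the file.sh@60 and file.sh@66 checks assert.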
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:02.199 10:46:27 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:23:02.199 10:46:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:02.199 10:46:27 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:02.199 10:46:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:02.457 [2024-11-15 10:46:27.726944] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:02.457 [2024-11-15 10:46:27.727708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf235d0 (107): Transport endpoint is not connected 00:23:02.457 [2024-11-15 10:46:27.728696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf235d0 (9): Bad file descriptor 00:23:02.457 [2024-11-15 10:46:27.729693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:23:02.457 [2024-11-15 10:46:27.729721] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:02.457 [2024-11-15 10:46:27.729733] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:23:02.457 [2024-11-15 10:46:27.729745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:23:02.457 request:
00:23:02.457 {
00:23:02.457 "name": "nvme0",
00:23:02.457 "trtype": "tcp",
00:23:02.457 "traddr": "127.0.0.1",
00:23:02.457 "adrfam": "ipv4",
00:23:02.457 "trsvcid": "4420",
00:23:02.457 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:23:02.457 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:23:02.457 "prchk_reftag": false,
00:23:02.457 "prchk_guard": false,
00:23:02.457 "hdgst": false,
00:23:02.457 "ddgst": false,
00:23:02.457 "psk": "key1",
00:23:02.457 "allow_unrecognized_csi": false,
00:23:02.457 "method": "bdev_nvme_attach_controller",
00:23:02.457 "req_id": 1
00:23:02.457 }
00:23:02.457 Got JSON-RPC error response
00:23:02.457 response:
00:23:02.457 {
00:23:02.457 "code": -5,
00:23:02.457 "message": "Input/output error"
00:23:02.457 }
00:23:02.457 10:46:27 keyring_file -- common/autotest_common.sh@653 -- # es=1
00:23:02.457 10:46:27 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:02.457 10:46:27 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:02.457 10:46:27 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:02.458 10:46:27 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0
00:23:02.458 10:46:27 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:23:02.458 10:46:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:23:02.458 10:46:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:23:02.458 10:46:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:23:02.458 10:46:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:23:02.780 10:46:28 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 ))
00:23:02.780 10:46:28 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1
00:23:02.780 10:46:28 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:23:02.780 10:46:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:23:02.780 10:46:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:23:02.780 10:46:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:23:02.780 10:46:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:23:03.053 10:46:28 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 ))
00:23:03.053 10:46:28 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0
00:23:03.053 10:46:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:23:03.310 10:46:28 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1
00:23:03.310 10:46:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1
00:23:03.568 10:46:28 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys
00:23:03.568 10:46:28 keyring_file -- keyring/file.sh@78 -- # jq length
00:23:03.568 10:46:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:23:03.826 10:46:29 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 ))
00:23:03.826 10:46:29 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.j8z9DGWq5r
00:23:03.826 10:46:29 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.j8z9DGWq5r
00:23:03.827 10:46:29 keyring_file --
common/autotest_common.sh@650 -- # local es=0 00:23:03.827 10:46:29 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.j8z9DGWq5r 00:23:03.827 10:46:29 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:23:03.827 10:46:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:03.827 10:46:29 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:23:03.827 10:46:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:03.827 10:46:29 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.j8z9DGWq5r 00:23:03.827 10:46:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.j8z9DGWq5r 00:23:04.085 [2024-11-15 10:46:29.523000] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.j8z9DGWq5r': 0100660 00:23:04.085 [2024-11-15 10:46:29.523054] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:04.085 request: 00:23:04.085 { 00:23:04.085 "name": "key0", 00:23:04.085 "path": "/tmp/tmp.j8z9DGWq5r", 00:23:04.085 "method": "keyring_file_add_key", 00:23:04.085 "req_id": 1 00:23:04.085 } 00:23:04.085 Got JSON-RPC error response 00:23:04.085 response: 00:23:04.085 { 00:23:04.085 "code": -1, 00:23:04.085 "message": "Operation not permitted" 00:23:04.085 } 00:23:04.085 10:46:29 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:23:04.085 10:46:29 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:04.085 10:46:29 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:04.085 10:46:29 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:04.085 10:46:29 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.j8z9DGWq5r 00:23:04.085 10:46:29 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.j8z9DGWq5r 00:23:04.085 10:46:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.j8z9DGWq5r 00:23:04.343 10:46:29 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.j8z9DGWq5r 00:23:04.343 10:46:29 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:23:04.343 10:46:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:04.343 10:46:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:04.343 10:46:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:04.343 10:46:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:04.343 10:46:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:04.601 10:46:30 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:23:04.601 10:46:30 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:04.601 10:46:30 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:23:04.601 10:46:30 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:04.601 10:46:30 
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:23:04.601 10:46:30 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:04.601 10:46:30 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:23:04.601 10:46:30 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:04.601 10:46:30 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:04.601 10:46:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:04.858 [2024-11-15 10:46:30.325795] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.j8z9DGWq5r': No such file or directory 00:23:04.858 [2024-11-15 10:46:30.325846] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:23:04.858 [2024-11-15 10:46:30.325869] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:23:04.858 [2024-11-15 10:46:30.325879] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:23:04.858 [2024-11-15 10:46:30.325890] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:04.858 [2024-11-15 10:46:30.325899] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:23:04.858 request: 00:23:04.858 { 00:23:04.858 "name": "nvme0", 00:23:04.858 "trtype": "tcp", 00:23:04.858 "traddr": "127.0.0.1", 00:23:04.858 "adrfam": "ipv4", 00:23:04.858 "trsvcid": "4420", 00:23:04.858 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:04.858 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:04.858 "prchk_reftag": false, 00:23:04.858 "prchk_guard": false, 00:23:04.858 "hdgst": false, 00:23:04.858 "ddgst": false, 00:23:04.858 "psk": "key0", 00:23:04.858 "allow_unrecognized_csi": false, 00:23:04.858 "method": "bdev_nvme_attach_controller", 00:23:04.858 "req_id": 1 00:23:04.858 } 00:23:04.858 Got JSON-RPC error response 00:23:04.858 response: 00:23:04.858 { 00:23:04.858 "code": -19, 00:23:04.858 "message": "No such device" 00:23:04.858 } 00:23:05.116 10:46:30 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:23:05.116 10:46:30 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:05.116 10:46:30 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:05.116 10:46:30 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:05.116 10:46:30 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:23:05.116 10:46:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:05.373 10:46:30 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:05.373 10:46:30 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:05.373 10:46:30 keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:05.373 10:46:30 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:05.373 
10:46:30 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:05.373 10:46:30 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:05.373 10:46:30 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.gl42uELkvN 00:23:05.373 10:46:30 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:05.373 10:46:30 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:05.373 10:46:30 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:23:05.373 10:46:30 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:05.373 10:46:30 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:05.373 10:46:30 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:23:05.373 10:46:30 keyring_file -- nvmf/common.sh@733 -- # python - 00:23:05.373 10:46:30 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.gl42uELkvN 00:23:05.373 10:46:30 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.gl42uELkvN 00:23:05.373 10:46:30 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.gl42uELkvN 00:23:05.373 10:46:30 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gl42uELkvN 00:23:05.373 10:46:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gl42uELkvN 00:23:05.630 10:46:30 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:05.630 10:46:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:05.889 nvme0n1 00:23:05.889 10:46:31 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:23:05.889 10:46:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:05.889 10:46:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:05.889 10:46:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:05.889 10:46:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:05.889 10:46:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:06.148 10:46:31 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:23:06.148 10:46:31 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:23:06.148 10:46:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:06.405 10:46:31 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:23:06.405 10:46:31 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:23:06.405 10:46:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:06.405 10:46:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:06.405 10:46:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:06.969 10:46:32 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:23:06.969 10:46:32 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:23:06.969 10:46:32 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:23:06.969 10:46:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:06.969 10:46:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:06.969 10:46:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:06.969 10:46:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:07.227 10:46:32 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:23:07.227 10:46:32 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:07.227 10:46:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:07.489 10:46:32 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:23:07.489 10:46:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:07.489 10:46:32 keyring_file -- keyring/file.sh@105 -- # jq length 00:23:07.753 10:46:33 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:23:07.753 10:46:33 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gl42uELkvN 00:23:07.753 10:46:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gl42uELkvN 00:23:08.011 10:46:33 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.whpyu34C20 00:23:08.011 10:46:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.whpyu34C20 00:23:08.268 10:46:33 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:08.268 10:46:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:08.526 nvme0n1 00:23:08.526 10:46:33 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:23:08.526 10:46:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:23:09.090 10:46:34 keyring_file -- keyring/file.sh@113 -- # config='{ 00:23:09.090 "subsystems": [ 00:23:09.090 { 00:23:09.090 "subsystem": "keyring", 00:23:09.090 "config": [ 00:23:09.090 { 00:23:09.090 "method": "keyring_file_add_key", 00:23:09.090 "params": { 00:23:09.090 "name": "key0", 00:23:09.090 "path": "/tmp/tmp.gl42uELkvN" 00:23:09.090 } 00:23:09.090 }, 00:23:09.090 { 00:23:09.090 "method": "keyring_file_add_key", 00:23:09.090 "params": { 00:23:09.090 "name": "key1", 00:23:09.090 "path": "/tmp/tmp.whpyu34C20" 00:23:09.090 } 00:23:09.090 } 00:23:09.090 ] 00:23:09.090 }, 00:23:09.090 { 00:23:09.090 "subsystem": "iobuf", 00:23:09.090 "config": [ 00:23:09.090 { 00:23:09.090 "method": "iobuf_set_options", 00:23:09.090 "params": { 00:23:09.090 "small_pool_count": 8192, 00:23:09.090 "large_pool_count": 1024, 00:23:09.090 "small_bufsize": 8192, 00:23:09.090 "large_bufsize": 135168, 00:23:09.090 "enable_numa": false 00:23:09.090 } 00:23:09.090 } 00:23:09.090 ] 00:23:09.090 }, 00:23:09.090 { 00:23:09.090 "subsystem": 
"sock", 00:23:09.090 "config": [ 00:23:09.091 { 00:23:09.091 "method": "sock_set_default_impl", 00:23:09.091 "params": { 00:23:09.091 "impl_name": "uring" 00:23:09.091 } 00:23:09.091 }, 00:23:09.091 { 00:23:09.091 "method": "sock_impl_set_options", 00:23:09.091 "params": { 00:23:09.091 "impl_name": "ssl", 00:23:09.091 "recv_buf_size": 4096, 00:23:09.091 "send_buf_size": 4096, 00:23:09.091 "enable_recv_pipe": true, 00:23:09.091 "enable_quickack": false, 00:23:09.091 "enable_placement_id": 0, 00:23:09.091 "enable_zerocopy_send_server": true, 00:23:09.091 "enable_zerocopy_send_client": false, 00:23:09.091 "zerocopy_threshold": 0, 00:23:09.091 "tls_version": 0, 00:23:09.091 "enable_ktls": false 00:23:09.091 } 00:23:09.091 }, 00:23:09.091 { 00:23:09.091 "method": "sock_impl_set_options", 00:23:09.091 "params": { 00:23:09.091 "impl_name": "posix", 00:23:09.091 "recv_buf_size": 2097152, 00:23:09.091 "send_buf_size": 2097152, 00:23:09.091 "enable_recv_pipe": true, 00:23:09.091 "enable_quickack": false, 00:23:09.091 "enable_placement_id": 0, 00:23:09.091 "enable_zerocopy_send_server": true, 00:23:09.091 "enable_zerocopy_send_client": false, 00:23:09.091 "zerocopy_threshold": 0, 00:23:09.091 "tls_version": 0, 00:23:09.091 "enable_ktls": false 00:23:09.091 } 00:23:09.091 }, 00:23:09.091 { 00:23:09.091 "method": "sock_impl_set_options", 00:23:09.091 "params": { 00:23:09.091 "impl_name": "uring", 00:23:09.091 "recv_buf_size": 2097152, 00:23:09.091 "send_buf_size": 2097152, 00:23:09.091 "enable_recv_pipe": true, 00:23:09.091 "enable_quickack": false, 00:23:09.091 "enable_placement_id": 0, 00:23:09.091 "enable_zerocopy_send_server": false, 00:23:09.091 "enable_zerocopy_send_client": false, 00:23:09.091 "zerocopy_threshold": 0, 00:23:09.091 "tls_version": 0, 00:23:09.091 "enable_ktls": false 00:23:09.091 } 00:23:09.091 } 00:23:09.091 ] 00:23:09.091 }, 00:23:09.091 { 00:23:09.091 "subsystem": "vmd", 00:23:09.091 "config": [] 00:23:09.091 }, 00:23:09.091 { 00:23:09.091 "subsystem": "accel", 00:23:09.091 "config": [ 00:23:09.091 { 00:23:09.091 "method": "accel_set_options", 00:23:09.091 "params": { 00:23:09.091 "small_cache_size": 128, 00:23:09.091 "large_cache_size": 16, 00:23:09.091 "task_count": 2048, 00:23:09.091 "sequence_count": 2048, 00:23:09.091 "buf_count": 2048 00:23:09.091 } 00:23:09.091 } 00:23:09.091 ] 00:23:09.091 }, 00:23:09.091 { 00:23:09.091 "subsystem": "bdev", 00:23:09.091 "config": [ 00:23:09.091 { 00:23:09.091 "method": "bdev_set_options", 00:23:09.091 "params": { 00:23:09.091 "bdev_io_pool_size": 65535, 00:23:09.091 "bdev_io_cache_size": 256, 00:23:09.091 "bdev_auto_examine": true, 00:23:09.091 "iobuf_small_cache_size": 128, 00:23:09.091 "iobuf_large_cache_size": 16 00:23:09.091 } 00:23:09.091 }, 00:23:09.091 { 00:23:09.091 "method": "bdev_raid_set_options", 00:23:09.091 "params": { 00:23:09.091 "process_window_size_kb": 1024, 00:23:09.091 "process_max_bandwidth_mb_sec": 0 00:23:09.091 } 00:23:09.091 }, 00:23:09.091 { 00:23:09.091 "method": "bdev_iscsi_set_options", 00:23:09.091 "params": { 00:23:09.091 "timeout_sec": 30 00:23:09.091 } 00:23:09.091 }, 00:23:09.091 { 00:23:09.091 "method": "bdev_nvme_set_options", 00:23:09.091 "params": { 00:23:09.091 "action_on_timeout": "none", 00:23:09.091 "timeout_us": 0, 00:23:09.091 "timeout_admin_us": 0, 00:23:09.091 "keep_alive_timeout_ms": 10000, 00:23:09.091 "arbitration_burst": 0, 00:23:09.091 "low_priority_weight": 0, 00:23:09.091 "medium_priority_weight": 0, 00:23:09.091 "high_priority_weight": 0, 00:23:09.091 "nvme_adminq_poll_period_us": 
10000, 00:23:09.091 "nvme_ioq_poll_period_us": 0, 00:23:09.091 "io_queue_requests": 512, 00:23:09.091 "delay_cmd_submit": true, 00:23:09.091 "transport_retry_count": 4, 00:23:09.091 "bdev_retry_count": 3, 00:23:09.091 "transport_ack_timeout": 0, 00:23:09.091 "ctrlr_loss_timeout_sec": 0, 00:23:09.091 "reconnect_delay_sec": 0, 00:23:09.091 "fast_io_fail_timeout_sec": 0, 00:23:09.091 "disable_auto_failback": false, 00:23:09.091 "generate_uuids": false, 00:23:09.091 "transport_tos": 0, 00:23:09.091 "nvme_error_stat": false, 00:23:09.091 "rdma_srq_size": 0, 00:23:09.091 "io_path_stat": false, 00:23:09.091 "allow_accel_sequence": false, 00:23:09.091 "rdma_max_cq_size": 0, 00:23:09.091 "rdma_cm_event_timeout_ms": 0, 00:23:09.091 "dhchap_digests": [ 00:23:09.091 "sha256", 00:23:09.091 "sha384", 00:23:09.091 "sha512" 00:23:09.091 ], 00:23:09.091 "dhchap_dhgroups": [ 00:23:09.091 "null", 00:23:09.091 "ffdhe2048", 00:23:09.091 "ffdhe3072", 00:23:09.091 "ffdhe4096", 00:23:09.091 "ffdhe6144", 00:23:09.091 "ffdhe8192" 00:23:09.091 ] 00:23:09.091 } 00:23:09.091 }, 00:23:09.091 { 00:23:09.091 "method": "bdev_nvme_attach_controller", 00:23:09.091 "params": { 00:23:09.091 "name": "nvme0", 00:23:09.091 "trtype": "TCP", 00:23:09.091 "adrfam": "IPv4", 00:23:09.091 "traddr": "127.0.0.1", 00:23:09.091 "trsvcid": "4420", 00:23:09.091 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:09.091 "prchk_reftag": false, 00:23:09.091 "prchk_guard": false, 00:23:09.091 "ctrlr_loss_timeout_sec": 0, 00:23:09.091 "reconnect_delay_sec": 0, 00:23:09.091 "fast_io_fail_timeout_sec": 0, 00:23:09.091 "psk": "key0", 00:23:09.091 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:09.091 "hdgst": false, 00:23:09.091 "ddgst": false, 00:23:09.091 "multipath": "multipath" 00:23:09.091 } 00:23:09.091 }, 00:23:09.091 { 00:23:09.091 "method": "bdev_nvme_set_hotplug", 00:23:09.091 "params": { 00:23:09.091 "period_us": 100000, 00:23:09.091 "enable": false 00:23:09.091 } 00:23:09.091 }, 00:23:09.091 { 00:23:09.091 "method": "bdev_wait_for_examine" 00:23:09.091 } 00:23:09.091 ] 00:23:09.091 }, 00:23:09.091 { 00:23:09.091 "subsystem": "nbd", 00:23:09.091 "config": [] 00:23:09.091 } 00:23:09.091 ] 00:23:09.091 }' 00:23:09.091 10:46:34 keyring_file -- keyring/file.sh@115 -- # killprocess 85535 00:23:09.091 10:46:34 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 85535 ']' 00:23:09.091 10:46:34 keyring_file -- common/autotest_common.sh@956 -- # kill -0 85535 00:23:09.091 10:46:34 keyring_file -- common/autotest_common.sh@957 -- # uname 00:23:09.091 10:46:34 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:09.091 10:46:34 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85535 00:23:09.091 killing process with pid 85535 00:23:09.091 Received shutdown signal, test time was about 1.000000 seconds 00:23:09.091 00:23:09.091 Latency(us) 00:23:09.091 [2024-11-15T10:46:34.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.091 [2024-11-15T10:46:34.589Z] =================================================================================================================== 00:23:09.091 [2024-11-15T10:46:34.589Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:09.091 10:46:34 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:09.091 10:46:34 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:09.091 10:46:34 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85535' 00:23:09.091 
10:46:34 keyring_file -- common/autotest_common.sh@971 -- # kill 85535 00:23:09.091 10:46:34 keyring_file -- common/autotest_common.sh@976 -- # wait 85535 00:23:09.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:09.091 10:46:34 keyring_file -- keyring/file.sh@118 -- # bperfpid=85798 00:23:09.091 10:46:34 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85798 /var/tmp/bperf.sock 00:23:09.091 10:46:34 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 85798 ']' 00:23:09.091 10:46:34 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:09.091 10:46:34 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:09.091 10:46:34 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:09.091 10:46:34 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:23:09.091 10:46:34 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:09.091 10:46:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:09.091 10:46:34 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:23:09.092 "subsystems": [ 00:23:09.092 { 00:23:09.092 "subsystem": "keyring", 00:23:09.092 "config": [ 00:23:09.092 { 00:23:09.092 "method": "keyring_file_add_key", 00:23:09.092 "params": { 00:23:09.092 "name": "key0", 00:23:09.092 "path": "/tmp/tmp.gl42uELkvN" 00:23:09.092 } 00:23:09.092 }, 00:23:09.092 { 00:23:09.092 "method": "keyring_file_add_key", 00:23:09.092 "params": { 00:23:09.092 "name": "key1", 00:23:09.092 "path": "/tmp/tmp.whpyu34C20" 00:23:09.092 } 00:23:09.092 } 00:23:09.092 ] 00:23:09.092 }, 00:23:09.092 { 00:23:09.092 "subsystem": "iobuf", 00:23:09.092 "config": [ 00:23:09.092 { 00:23:09.092 "method": "iobuf_set_options", 00:23:09.092 "params": { 00:23:09.092 "small_pool_count": 8192, 00:23:09.092 "large_pool_count": 1024, 00:23:09.092 "small_bufsize": 8192, 00:23:09.092 "large_bufsize": 135168, 00:23:09.092 "enable_numa": false 00:23:09.092 } 00:23:09.092 } 00:23:09.092 ] 00:23:09.092 }, 00:23:09.092 { 00:23:09.092 "subsystem": "sock", 00:23:09.092 "config": [ 00:23:09.092 { 00:23:09.092 "method": "sock_set_default_impl", 00:23:09.092 "params": { 00:23:09.092 "impl_name": "uring" 00:23:09.092 } 00:23:09.092 }, 00:23:09.092 { 00:23:09.092 "method": "sock_impl_set_options", 00:23:09.092 "params": { 00:23:09.092 "impl_name": "ssl", 00:23:09.092 "recv_buf_size": 4096, 00:23:09.092 "send_buf_size": 4096, 00:23:09.092 "enable_recv_pipe": true, 00:23:09.092 "enable_quickack": false, 00:23:09.092 "enable_placement_id": 0, 00:23:09.092 "enable_zerocopy_send_server": true, 00:23:09.092 "enable_zerocopy_send_client": false, 00:23:09.092 "zerocopy_threshold": 0, 00:23:09.092 "tls_version": 0, 00:23:09.092 "enable_ktls": false 00:23:09.092 } 00:23:09.092 }, 00:23:09.092 { 00:23:09.092 "method": "sock_impl_set_options", 00:23:09.092 "params": { 00:23:09.092 "impl_name": "posix", 00:23:09.092 "recv_buf_size": 2097152, 00:23:09.092 "send_buf_size": 2097152, 00:23:09.092 "enable_recv_pipe": true, 00:23:09.092 "enable_quickack": false, 00:23:09.092 "enable_placement_id": 0, 00:23:09.092 "enable_zerocopy_send_server": true, 00:23:09.092 "enable_zerocopy_send_client": false, 00:23:09.092 "zerocopy_threshold": 0, 00:23:09.092 "tls_version": 0, 00:23:09.092 "enable_ktls": false 
00:23:09.092 } 00:23:09.092 }, 00:23:09.092 { 00:23:09.092 "method": "sock_impl_set_options", 00:23:09.092 "params": { 00:23:09.092 "impl_name": "uring", 00:23:09.092 "recv_buf_size": 2097152, 00:23:09.092 "send_buf_size": 2097152, 00:23:09.092 "enable_recv_pipe": true, 00:23:09.092 "enable_quickack": false, 00:23:09.092 "enable_placement_id": 0, 00:23:09.092 "enable_zerocopy_send_server": false, 00:23:09.092 "enable_zerocopy_send_client": false, 00:23:09.092 "zerocopy_threshold": 0, 00:23:09.092 "tls_version": 0, 00:23:09.092 "enable_ktls": false 00:23:09.092 } 00:23:09.092 } 00:23:09.092 ] 00:23:09.092 }, 00:23:09.092 { 00:23:09.092 "subsystem": "vmd", 00:23:09.092 "config": [] 00:23:09.092 }, 00:23:09.092 { 00:23:09.092 "subsystem": "accel", 00:23:09.092 "config": [ 00:23:09.092 { 00:23:09.092 "method": "accel_set_options", 00:23:09.092 "params": { 00:23:09.092 "small_cache_size": 128, 00:23:09.092 "large_cache_size": 16, 00:23:09.092 "task_count": 2048, 00:23:09.092 "sequence_count": 2048, 00:23:09.092 "buf_count": 2048 00:23:09.092 } 00:23:09.092 } 00:23:09.092 ] 00:23:09.092 }, 00:23:09.092 { 00:23:09.092 "subsystem": "bdev", 00:23:09.092 "config": [ 00:23:09.092 { 00:23:09.092 "method": "bdev_set_options", 00:23:09.092 "params": { 00:23:09.092 "bdev_io_pool_size": 65535, 00:23:09.092 "bdev_io_cache_size": 256, 00:23:09.092 "bdev_auto_examine": true, 00:23:09.092 "iobuf_small_cache_size": 128, 00:23:09.092 "iobuf_large_cache_size": 16 00:23:09.092 } 00:23:09.092 }, 00:23:09.092 { 00:23:09.092 "method": "bdev_raid_set_options", 00:23:09.092 "params": { 00:23:09.092 "process_window_size_kb": 1024, 00:23:09.092 "process_max_bandwidth_mb_sec": 0 00:23:09.092 } 00:23:09.092 }, 00:23:09.092 { 00:23:09.092 "method": "bdev_iscsi_set_options", 00:23:09.092 "params": { 00:23:09.092 "timeout_sec": 30 00:23:09.092 } 00:23:09.092 }, 00:23:09.092 { 00:23:09.092 "method": "bdev_nvme_set_options", 00:23:09.092 "params": { 00:23:09.092 "action_on_timeout": "none", 00:23:09.092 "timeout_us": 0, 00:23:09.092 "timeout_admin_us": 0, 00:23:09.092 "keep_alive_timeout_ms": 10000, 00:23:09.092 "arbitration_burst": 0, 00:23:09.092 "low_priority_weight": 0, 00:23:09.092 "medium_priority_weight": 0, 00:23:09.092 "high_priority_weight": 0, 00:23:09.092 "nvme_adminq_poll_period_us": 10000, 00:23:09.092 "nvme_ioq_poll_period_us": 0, 00:23:09.092 "io_queue_requests": 512, 00:23:09.092 "delay_cmd_submit": true, 00:23:09.092 "transport_retry_count": 4, 00:23:09.092 "bdev_retry_count": 3, 00:23:09.092 "transport_ack_timeout": 0, 00:23:09.092 "ctrlr_loss_timeout_sec": 0, 00:23:09.092 "reconnect_delay_sec": 0, 00:23:09.092 "fast_io_fail_timeout_sec": 0, 00:23:09.092 "disable_auto_failback": false, 00:23:09.092 "generate_uuids": false, 00:23:09.092 "transport_tos": 0, 00:23:09.092 "nvme_error_stat": false, 00:23:09.092 "rdma_srq_size": 0, 00:23:09.092 "io_path_stat": false, 00:23:09.092 "allow_accel_sequence": false, 00:23:09.092 "rdma_max_cq_size": 0, 00:23:09.092 "rdma_cm_event_timeout_ms": 0, 00:23:09.092 "dhchap_digests": [ 00:23:09.092 "sha256", 00:23:09.092 "sha384", 00:23:09.092 "sha512" 00:23:09.092 ], 00:23:09.092 "dhchap_dhgroups": [ 00:23:09.092 "null", 00:23:09.092 "ffdhe2048", 00:23:09.092 "ffdhe3072", 00:23:09.092 "ffdhe4096", 00:23:09.092 "ffdhe6144", 00:23:09.092 "ffdhe8192" 00:23:09.092 ] 00:23:09.092 } 00:23:09.092 }, 00:23:09.092 { 00:23:09.092 "method": "bdev_nvme_attach_controller", 00:23:09.092 "params": { 00:23:09.092 "name": "nvme0", 00:23:09.092 "trtype": "TCP", 00:23:09.092 "adrfam": "IPv4", 
00:23:09.092 "traddr": "127.0.0.1", 00:23:09.092 "trsvcid": "4420", 00:23:09.092 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:09.092 "prchk_reftag": false, 00:23:09.092 "prchk_guard": false, 00:23:09.092 "ctrlr_loss_timeout_sec": 0, 00:23:09.092 "reconnect_delay_sec": 0, 00:23:09.092 "fast_io_fail_timeout_sec": 0, 00:23:09.092 "psk": "key0", 00:23:09.092 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:09.092 "hdgst": false, 00:23:09.092 "ddgst": false, 00:23:09.092 "multipath": "multipath" 00:23:09.092 } 00:23:09.092 }, 00:23:09.092 { 00:23:09.092 "method": "bdev_nvme_set_hotplug", 00:23:09.092 "params": { 00:23:09.092 "period_us": 100000, 00:23:09.092 "enable": false 00:23:09.092 } 00:23:09.092 }, 00:23:09.092 { 00:23:09.092 "method": "bdev_wait_for_examine" 00:23:09.092 } 00:23:09.092 ] 00:23:09.092 }, 00:23:09.092 { 00:23:09.092 "subsystem": "nbd", 00:23:09.092 "config": [] 00:23:09.092 } 00:23:09.092 ] 00:23:09.092 }' 00:23:09.350 [2024-11-15 10:46:34.600815] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:23:09.350 [2024-11-15 10:46:34.600904] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85798 ] 00:23:09.350 [2024-11-15 10:46:34.744839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.350 [2024-11-15 10:46:34.805176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.607 [2024-11-15 10:46:34.940500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:09.607 [2024-11-15 10:46:35.000357] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:10.172 10:46:35 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:10.172 10:46:35 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:23:10.172 10:46:35 keyring_file -- keyring/file.sh@121 -- # jq length 00:23:10.172 10:46:35 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:23:10.172 10:46:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:10.738 10:46:35 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:23:10.738 10:46:35 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:23:10.738 10:46:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:10.738 10:46:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:10.738 10:46:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:10.738 10:46:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:10.738 10:46:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:10.996 10:46:36 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:23:10.996 10:46:36 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:23:10.996 10:46:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:10.996 10:46:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:10.996 10:46:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:10.996 10:46:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:10.996 10:46:36 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:11.255 10:46:36 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:23:11.255 10:46:36 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:23:11.255 10:46:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:23:11.255 10:46:36 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:23:11.514 10:46:36 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:23:11.514 10:46:36 keyring_file -- keyring/file.sh@1 -- # cleanup 00:23:11.514 10:46:36 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.gl42uELkvN /tmp/tmp.whpyu34C20 00:23:11.514 10:46:36 keyring_file -- keyring/file.sh@20 -- # killprocess 85798 00:23:11.514 10:46:36 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 85798 ']' 00:23:11.514 10:46:36 keyring_file -- common/autotest_common.sh@956 -- # kill -0 85798 00:23:11.514 10:46:36 keyring_file -- common/autotest_common.sh@957 -- # uname 00:23:11.514 10:46:36 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:11.514 10:46:36 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85798 00:23:11.514 10:46:36 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:11.514 killing process with pid 85798 00:23:11.514 Received shutdown signal, test time was about 1.000000 seconds 00:23:11.514 00:23:11.514 Latency(us) 00:23:11.514 [2024-11-15T10:46:37.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.514 [2024-11-15T10:46:37.012Z] =================================================================================================================== 00:23:11.514 [2024-11-15T10:46:37.012Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:11.514 10:46:36 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:11.514 10:46:36 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85798' 00:23:11.514 10:46:36 keyring_file -- common/autotest_common.sh@971 -- # kill 85798 00:23:11.514 10:46:36 keyring_file -- common/autotest_common.sh@976 -- # wait 85798 00:23:11.773 10:46:37 keyring_file -- keyring/file.sh@21 -- # killprocess 85518 00:23:11.773 10:46:37 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 85518 ']' 00:23:11.773 10:46:37 keyring_file -- common/autotest_common.sh@956 -- # kill -0 85518 00:23:11.773 10:46:37 keyring_file -- common/autotest_common.sh@957 -- # uname 00:23:11.773 10:46:37 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:11.773 10:46:37 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85518 00:23:11.773 killing process with pid 85518 00:23:11.773 10:46:37 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:11.773 10:46:37 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:11.773 10:46:37 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85518' 00:23:11.773 10:46:37 keyring_file -- common/autotest_common.sh@971 -- # kill 85518 00:23:11.773 10:46:37 keyring_file -- common/autotest_common.sh@976 -- # wait 85518 00:23:12.032 00:23:12.032 real 0m17.427s 00:23:12.032 user 0m43.932s 00:23:12.032 sys 0m3.122s 00:23:12.032 10:46:37 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:12.032 10:46:37 keyring_file 
-- common/autotest_common.sh@10 -- # set +x 00:23:12.032 ************************************ 00:23:12.032 END TEST keyring_file 00:23:12.032 ************************************ 00:23:12.290 10:46:37 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:23:12.290 10:46:37 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:23:12.290 10:46:37 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:12.290 10:46:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:12.291 10:46:37 -- common/autotest_common.sh@10 -- # set +x 00:23:12.291 ************************************ 00:23:12.291 START TEST keyring_linux 00:23:12.291 ************************************ 00:23:12.291 10:46:37 keyring_linux -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:23:12.291 Joined session keyring: 912436774 00:23:12.291 * Looking for test storage... 00:23:12.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:12.291 10:46:37 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:12.291 10:46:37 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:23:12.291 10:46:37 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:12.291 10:46:37 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@345 -- # : 1 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@368 -- # return 0 00:23:12.291 10:46:37 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:12.291 10:46:37 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:12.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.291 --rc genhtml_branch_coverage=1 00:23:12.291 --rc genhtml_function_coverage=1 00:23:12.291 --rc genhtml_legend=1 00:23:12.291 --rc geninfo_all_blocks=1 00:23:12.291 --rc geninfo_unexecuted_blocks=1 00:23:12.291 00:23:12.291 ' 00:23:12.291 10:46:37 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:12.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.291 --rc genhtml_branch_coverage=1 00:23:12.291 --rc genhtml_function_coverage=1 00:23:12.291 --rc genhtml_legend=1 00:23:12.291 --rc geninfo_all_blocks=1 00:23:12.291 --rc geninfo_unexecuted_blocks=1 00:23:12.291 00:23:12.291 ' 00:23:12.291 10:46:37 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:12.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.291 --rc genhtml_branch_coverage=1 00:23:12.291 --rc genhtml_function_coverage=1 00:23:12.291 --rc genhtml_legend=1 00:23:12.291 --rc geninfo_all_blocks=1 00:23:12.291 --rc geninfo_unexecuted_blocks=1 00:23:12.291 00:23:12.291 ' 00:23:12.291 10:46:37 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:12.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.291 --rc genhtml_branch_coverage=1 00:23:12.291 --rc genhtml_function_coverage=1 00:23:12.291 --rc genhtml_legend=1 00:23:12.291 --rc geninfo_all_blocks=1 00:23:12.291 --rc geninfo_unexecuted_blocks=1 00:23:12.291 00:23:12.291 ' 00:23:12.291 10:46:37 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:12.291 10:46:37 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:12.291 10:46:37 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:50e4d619-cecf-4dd2-989d-1336dee31d8f 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=50e4d619-cecf-4dd2-989d-1336dee31d8f 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:12.291 10:46:37 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:12.291 10:46:37 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.291 10:46:37 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.291 10:46:37 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.291 10:46:37 keyring_linux -- paths/export.sh@5 -- # export PATH 00:23:12.291 10:46:37 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@51 -- # : 0 
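Note: the lt/cmp_versions walk traced a little earlier in this section (scripts/common.sh@333-368) is how the harness decides whether the installed lcov is new enough: each version string is split on '.', '-' and ':' via IFS, then compared field by field, padding the shorter list with zeros. A condensed bash sketch of that comparator (not the verbatim helper; the decimal validation and gt/eq bookkeeping seen in the trace are elided):

    cmp_versions() {                 # usage: cmp_versions 1.15 '<' 2
        local op=$2 v1 v2 i
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$3"
        for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
            ((${v1[i]:-0} > ${v2[i]:-0})) && { [[ $op == '>' ]]; return; }
            ((${v1[i]:-0} < ${v2[i]:-0})) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]           # all fields equal: only ==, <=, >= succeed
    }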
00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:12.291 10:46:37 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:12.291 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:12.550 10:46:37 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:12.550 10:46:37 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:12.550 10:46:37 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:12.550 10:46:37 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:12.550 10:46:37 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:12.550 10:46:37 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:12.550 10:46:37 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:23:12.550 10:46:37 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:23:12.550 10:46:37 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:23:12.550 10:46:37 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:23:12.550 10:46:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:23:12.550 10:46:37 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:23:12.550 10:46:37 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:12.550 10:46:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:23:12.550 10:46:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:23:12.550 10:46:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:12.550 10:46:37 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:12.550 10:46:37 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:23:12.550 10:46:37 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:12.550 10:46:37 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:12.550 10:46:37 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:23:12.550 10:46:37 keyring_linux -- nvmf/common.sh@733 -- # python - 00:23:12.550 10:46:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:23:12.550 /tmp/:spdk-test:key0 00:23:12.550 10:46:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:23:12.550 10:46:37 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:23:12.550 10:46:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:23:12.550 10:46:37 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:23:12.550 10:46:37 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:12.550 10:46:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:23:12.550 10:46:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:23:12.550 10:46:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:23:12.550 10:46:37 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:12.550 10:46:37 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:23:12.550 10:46:37 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:12.550 10:46:37 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:23:12.550 10:46:37 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:23:12.550 10:46:37 keyring_linux -- nvmf/common.sh@733 -- # python - 00:23:12.550 10:46:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:23:12.550 /tmp/:spdk-test:key1 00:23:12.550 10:46:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:23:12.550 10:46:37 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85924 00:23:12.550 10:46:37 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:12.550 10:46:37 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85924 00:23:12.550 10:46:37 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 85924 ']' 00:23:12.551 10:46:37 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.551 10:46:37 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:12.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.551 10:46:37 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.551 10:46:37 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:12.551 10:46:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:12.551 [2024-11-15 10:46:37.974580] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
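Note: the two prep_key calls above (keyring/common.sh@15-23) wrap each raw hex key into the NVMe TLS PSK interchange format via the inline `python -` heredoc at nvmf/common.sh@733, then write it to /tmp/:spdk-test:keyN with mode 0600. A minimal sketch of that transformation, on the assumption that format_key base64-encodes the ASCII key bytes followed by a little-endian CRC32; if that assumption holds, this prints exactly the NVMeTLSkey-1:00:... string for key0 that appears later in this trace:

    python3 <<'EOF'
    import base64, zlib
    key = b"00112233445566778899aabbccddeeff"           # key0 from linux.sh@13
    crc = zlib.crc32(key).to_bytes(4, byteorder="little")
    # digest 0 -> the "00" field; a non-zero digest would select a PSK hash
    print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
    EOF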
00:23:12.551 [2024-11-15 10:46:37.974673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85924 ] 00:23:12.810 [2024-11-15 10:46:38.115828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.810 [2024-11-15 10:46:38.180958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.810 [2024-11-15 10:46:38.256981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:13.745 10:46:38 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:13.745 10:46:38 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:23:13.745 10:46:38 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:23:13.745 10:46:38 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.745 10:46:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:13.745 [2024-11-15 10:46:38.981300] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.745 null0 00:23:13.745 [2024-11-15 10:46:39.013242] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:13.745 [2024-11-15 10:46:39.013449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:13.745 10:46:39 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.745 10:46:39 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:23:13.745 136197249 00:23:13.745 10:46:39 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:23:13.745 199984965 00:23:13.745 10:46:39 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85939 00:23:13.745 10:46:39 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:23:13.745 10:46:39 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85939 /var/tmp/bperf.sock 00:23:13.745 10:46:39 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 85939 ']' 00:23:13.745 10:46:39 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:13.745 10:46:39 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:13.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:13.745 10:46:39 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:13.745 10:46:39 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:13.745 10:46:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:13.745 [2024-11-15 10:46:39.096543] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
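Note: linux.sh@66-67 above load each interchange key into the kernel session keyring; the bare numbers printed (136197249 and 199984965) are the kernel key serials that the later search/print/unlink steps resolve. The full round trip, condensed from this trace with the exact key0 name and payload:

    sn=$(keyctl add user :spdk-test:key0 \
        "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
    keyctl search @s user :spdk-test:key0    # resolves the same serial
    keyctl print "$sn"                       # dumps the payload back
    keyctl unlink "$sn"                      # "1 links removed", as in cleanup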
00:23:13.745 [2024-11-15 10:46:39.096638] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85939 ] 00:23:14.005 [2024-11-15 10:46:39.247363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.005 [2024-11-15 10:46:39.324540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.005 10:46:39 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:14.005 10:46:39 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:23:14.005 10:46:39 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:23:14.005 10:46:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:23:14.264 10:46:39 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:23:14.264 10:46:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:14.522 [2024-11-15 10:46:39.974250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:14.781 10:46:40 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:23:14.781 10:46:40 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:23:15.039 [2024-11-15 10:46:40.281647] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:15.039 nvme0n1 00:23:15.039 10:46:40 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:23:15.039 10:46:40 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:23:15.039 10:46:40 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:23:15.039 10:46:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:23:15.039 10:46:40 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:15.039 10:46:40 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:23:15.298 10:46:40 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:23:15.298 10:46:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:23:15.298 10:46:40 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:23:15.298 10:46:40 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:23:15.298 10:46:40 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:23:15.298 10:46:40 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:15.298 10:46:40 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:15.554 10:46:40 keyring_linux -- keyring/linux.sh@25 -- # sn=136197249 00:23:15.554 10:46:40 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:23:15.554 10:46:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
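Note: check_keys (keyring/linux.sh@19-27, traced above) cross-checks SPDK's view of the keyring against the kernel's over the bperf RPC socket: the key count, the serial number, and the payload all have to agree. Condensed from the trace, with rpc.py invoked as it is throughout this run:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
    $rpc keyring_get_keys | jq length        # must equal the expected count
    sn=$($rpc keyring_get_keys |
        jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
    [[ $sn == $(keyctl search @s user :spdk-test:key0) ]]
    keyctl print "$sn"                       # must match the NVMeTLSkey-1 payload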
00:23:15.554 10:46:40 keyring_linux -- keyring/linux.sh@26 -- # [[ 136197249 == \1\3\6\1\9\7\2\4\9 ]] 00:23:15.554 10:46:40 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 136197249 00:23:15.554 10:46:40 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:23:15.554 10:46:40 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:15.812 Running I/O for 1 seconds... 00:23:16.825 11746.00 IOPS, 45.88 MiB/s 00:23:16.825 Latency(us) 00:23:16.825 [2024-11-15T10:46:42.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.825 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:16.825 nvme0n1 : 1.01 11749.72 45.90 0.00 0.00 10831.43 5362.04 15013.70 00:23:16.825 [2024-11-15T10:46:42.323Z] =================================================================================================================== 00:23:16.825 [2024-11-15T10:46:42.323Z] Total : 11749.72 45.90 0.00 0.00 10831.43 5362.04 15013.70 00:23:16.825 { 00:23:16.825 "results": [ 00:23:16.825 { 00:23:16.825 "job": "nvme0n1", 00:23:16.825 "core_mask": "0x2", 00:23:16.825 "workload": "randread", 00:23:16.825 "status": "finished", 00:23:16.825 "queue_depth": 128, 00:23:16.825 "io_size": 4096, 00:23:16.825 "runtime": 1.010577, 00:23:16.825 "iops": 11749.723177946857, 00:23:16.825 "mibps": 45.89735616385491, 00:23:16.825 "io_failed": 0, 00:23:16.825 "io_timeout": 0, 00:23:16.825 "avg_latency_us": 10831.429108671353, 00:23:16.825 "min_latency_us": 5362.036363636364, 00:23:16.825 "max_latency_us": 15013.701818181818 00:23:16.825 } 00:23:16.825 ], 00:23:16.825 "core_count": 1 00:23:16.825 } 00:23:16.825 10:46:42 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:16.825 10:46:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:17.083 10:46:42 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:23:17.083 10:46:42 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:23:17.083 10:46:42 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:23:17.083 10:46:42 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:23:17.083 10:46:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:17.083 10:46:42 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:23:17.341 10:46:42 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:23:17.341 10:46:42 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:23:17.341 10:46:42 keyring_linux -- keyring/linux.sh@23 -- # return 00:23:17.341 10:46:42 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:17.341 10:46:42 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:23:17.341 10:46:42 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
00:23:17.341 10:46:42 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:23:17.341 10:46:42 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:17.341 10:46:42 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:23:17.341 10:46:42 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:17.341 10:46:42 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:17.341 10:46:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:17.599 [2024-11-15 10:46:43.077418] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:17.599 [2024-11-15 10:46:43.077814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e25d0 (107): Transport endpoint is not connected 00:23:17.600 [2024-11-15 10:46:43.078805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e25d0 (9): Bad file descriptor 00:23:17.600 [2024-11-15 10:46:43.079801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:23:17.600 [2024-11-15 10:46:43.079823] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:17.600 [2024-11-15 10:46:43.079834] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:23:17.600 [2024-11-15 10:46:43.079845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
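Note: this is the negative half of the test (linux.sh@84). :spdk-test:key1 exists in the kernel keyring but the target side was presumably configured only for key0, so the TLS handshake collapses (the errno 107 / bad-file-descriptor errors above) and the attach must fail; the NOT wrapper inverts the exit status so the test passes only on failure. The failing call, condensed; the JSON-RPC dump that follows reports code -5, Input/output error:

    NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key1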
00:23:17.600 request: 00:23:17.600 { 00:23:17.600 "name": "nvme0", 00:23:17.600 "trtype": "tcp", 00:23:17.600 "traddr": "127.0.0.1", 00:23:17.600 "adrfam": "ipv4", 00:23:17.600 "trsvcid": "4420", 00:23:17.600 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:17.600 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:17.600 "prchk_reftag": false, 00:23:17.600 "prchk_guard": false, 00:23:17.600 "hdgst": false, 00:23:17.600 "ddgst": false, 00:23:17.600 "psk": ":spdk-test:key1", 00:23:17.600 "allow_unrecognized_csi": false, 00:23:17.600 "method": "bdev_nvme_attach_controller", 00:23:17.600 "req_id": 1 00:23:17.600 } 00:23:17.600 Got JSON-RPC error response 00:23:17.600 response: 00:23:17.600 { 00:23:17.600 "code": -5, 00:23:17.600 "message": "Input/output error" 00:23:17.600 } 00:23:17.858 10:46:43 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:23:17.858 10:46:43 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:17.858 10:46:43 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:17.858 10:46:43 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:17.858 10:46:43 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:23:17.858 10:46:43 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:23:17.858 10:46:43 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:23:17.858 10:46:43 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:23:17.858 10:46:43 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:23:17.858 10:46:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:23:17.858 10:46:43 keyring_linux -- keyring/linux.sh@33 -- # sn=136197249 00:23:17.858 10:46:43 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 136197249 00:23:17.858 1 links removed 00:23:17.858 10:46:43 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:23:17.858 10:46:43 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:23:17.858 10:46:43 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:23:17.858 10:46:43 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:23:17.858 10:46:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:23:17.858 10:46:43 keyring_linux -- keyring/linux.sh@33 -- # sn=199984965 00:23:17.858 10:46:43 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 199984965 00:23:17.858 1 links removed 00:23:17.858 10:46:43 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85939 00:23:17.858 10:46:43 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 85939 ']' 00:23:17.858 10:46:43 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 85939 00:23:17.858 10:46:43 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:23:17.858 10:46:43 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:17.858 10:46:43 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85939 00:23:17.858 10:46:43 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:17.858 10:46:43 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:17.858 killing process with pid 85939 00:23:17.858 10:46:43 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85939' 00:23:17.858 Received shutdown signal, test time was about 1.000000 seconds 00:23:17.858 00:23:17.858 Latency(us) 00:23:17.858 [2024-11-15T10:46:43.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:23:17.858 [2024-11-15T10:46:43.356Z] =================================================================================================================== 00:23:17.858 [2024-11-15T10:46:43.356Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:17.858 10:46:43 keyring_linux -- common/autotest_common.sh@971 -- # kill 85939 00:23:17.858 10:46:43 keyring_linux -- common/autotest_common.sh@976 -- # wait 85939 00:23:17.858 10:46:43 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85924 00:23:17.858 10:46:43 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 85924 ']' 00:23:17.858 10:46:43 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 85924 00:23:17.858 10:46:43 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:23:17.858 10:46:43 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:17.858 10:46:43 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85924 00:23:18.116 10:46:43 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:18.116 10:46:43 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:18.116 killing process with pid 85924 00:23:18.116 10:46:43 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85924' 00:23:18.116 10:46:43 keyring_linux -- common/autotest_common.sh@971 -- # kill 85924 00:23:18.116 10:46:43 keyring_linux -- common/autotest_common.sh@976 -- # wait 85924 00:23:18.375 00:23:18.375 real 0m6.192s 00:23:18.375 user 0m12.129s 00:23:18.375 sys 0m1.557s 00:23:18.375 10:46:43 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:18.375 ************************************ 00:23:18.375 END TEST keyring_linux 00:23:18.375 ************************************ 00:23:18.375 10:46:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:18.375 10:46:43 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:23:18.375 10:46:43 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:23:18.375 10:46:43 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:23:18.375 10:46:43 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:23:18.375 10:46:43 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:23:18.375 10:46:43 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:23:18.375 10:46:43 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:23:18.375 10:46:43 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:23:18.375 10:46:43 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:23:18.375 10:46:43 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:23:18.375 10:46:43 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:23:18.375 10:46:43 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:23:18.375 10:46:43 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:23:18.375 10:46:43 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:23:18.375 10:46:43 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:23:18.375 10:46:43 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:23:18.375 10:46:43 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:23:18.375 10:46:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:18.375 10:46:43 -- common/autotest_common.sh@10 -- # set +x 00:23:18.375 10:46:43 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:23:18.375 10:46:43 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:23:18.375 10:46:43 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:23:18.375 10:46:43 -- common/autotest_common.sh@10 -- # set +x 00:23:20.298 INFO: APP EXITING 00:23:20.298 INFO: killing all VMs 
00:23:20.298 INFO: killing vhost app 00:23:20.298 INFO: EXIT DONE 00:23:20.865 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:20.865 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:23:20.865 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:23:21.431 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:21.431 Cleaning 00:23:21.431 Removing: /var/run/dpdk/spdk0/config 00:23:21.431 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:21.431 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:21.431 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:21.431 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:21.431 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:21.431 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:21.431 Removing: /var/run/dpdk/spdk1/config 00:23:21.431 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:23:21.431 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:23:21.431 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:23:21.431 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:23:21.431 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:23:21.431 Removing: /var/run/dpdk/spdk1/hugepage_info 00:23:21.431 Removing: /var/run/dpdk/spdk2/config 00:23:21.431 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:23:21.431 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:23:21.431 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:23:21.431 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:23:21.431 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:23:21.690 Removing: /var/run/dpdk/spdk2/hugepage_info 00:23:21.690 Removing: /var/run/dpdk/spdk3/config 00:23:21.690 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:23:21.690 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:23:21.690 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:23:21.690 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:23:21.690 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:23:21.690 Removing: /var/run/dpdk/spdk3/hugepage_info 00:23:21.690 Removing: /var/run/dpdk/spdk4/config 00:23:21.690 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:23:21.690 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:23:21.690 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:23:21.690 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:23:21.690 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:23:21.690 Removing: /var/run/dpdk/spdk4/hugepage_info 00:23:21.690 Removing: /dev/shm/nvmf_trace.0 00:23:21.690 Removing: /dev/shm/spdk_tgt_trace.pid56775 00:23:21.690 Removing: /var/run/dpdk/spdk0 00:23:21.690 Removing: /var/run/dpdk/spdk1 00:23:21.690 Removing: /var/run/dpdk/spdk2 00:23:21.690 Removing: /var/run/dpdk/spdk3 00:23:21.690 Removing: /var/run/dpdk/spdk4 00:23:21.690 Removing: /var/run/dpdk/spdk_pid56622 00:23:21.690 Removing: /var/run/dpdk/spdk_pid56775 00:23:21.690 Removing: /var/run/dpdk/spdk_pid56973 00:23:21.690 Removing: /var/run/dpdk/spdk_pid57061 00:23:21.690 Removing: /var/run/dpdk/spdk_pid57081 00:23:21.690 Removing: /var/run/dpdk/spdk_pid57190 00:23:21.690 Removing: /var/run/dpdk/spdk_pid57209 00:23:21.690 Removing: /var/run/dpdk/spdk_pid57348 00:23:21.690 Removing: /var/run/dpdk/spdk_pid57549 00:23:21.690 Removing: /var/run/dpdk/spdk_pid57703 00:23:21.690 Removing: /var/run/dpdk/spdk_pid57781 00:23:21.690 
Removing: /var/run/dpdk/spdk_pid57865 00:23:21.690 Removing: /var/run/dpdk/spdk_pid57951 00:23:21.690 Removing: /var/run/dpdk/spdk_pid58034 00:23:21.690 Removing: /var/run/dpdk/spdk_pid58067 00:23:21.690 Removing: /var/run/dpdk/spdk_pid58097 00:23:21.690 Removing: /var/run/dpdk/spdk_pid58172 00:23:21.690 Removing: /var/run/dpdk/spdk_pid58266 00:23:21.690 Removing: /var/run/dpdk/spdk_pid58710 00:23:21.690 Removing: /var/run/dpdk/spdk_pid58759 00:23:21.690 Removing: /var/run/dpdk/spdk_pid58798 00:23:21.690 Removing: /var/run/dpdk/spdk_pid58814 00:23:21.690 Removing: /var/run/dpdk/spdk_pid58887 00:23:21.690 Removing: /var/run/dpdk/spdk_pid58903 00:23:21.690 Removing: /var/run/dpdk/spdk_pid58975 00:23:21.690 Removing: /var/run/dpdk/spdk_pid58984 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59029 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59047 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59093 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59111 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59241 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59277 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59354 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59699 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59711 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59742 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59761 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59771 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59801 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59809 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59830 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59849 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59868 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59888 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59908 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59916 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59937 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59956 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59975 00:23:21.690 Removing: /var/run/dpdk/spdk_pid59985 00:23:21.690 Removing: /var/run/dpdk/spdk_pid60012 00:23:21.690 Removing: /var/run/dpdk/spdk_pid60025 00:23:21.690 Removing: /var/run/dpdk/spdk_pid60041 00:23:21.690 Removing: /var/run/dpdk/spdk_pid60077 00:23:21.690 Removing: /var/run/dpdk/spdk_pid60090 00:23:21.690 Removing: /var/run/dpdk/spdk_pid60121 00:23:21.690 Removing: /var/run/dpdk/spdk_pid60193 00:23:21.690 Removing: /var/run/dpdk/spdk_pid60221 00:23:21.690 Removing: /var/run/dpdk/spdk_pid60231 00:23:21.690 Removing: /var/run/dpdk/spdk_pid60259 00:23:21.690 Removing: /var/run/dpdk/spdk_pid60274 00:23:21.690 Removing: /var/run/dpdk/spdk_pid60282 00:23:21.690 Removing: /var/run/dpdk/spdk_pid60324 00:23:21.690 Removing: /var/run/dpdk/spdk_pid60339 00:23:21.690 Removing: /var/run/dpdk/spdk_pid60368 00:23:21.690 Removing: /var/run/dpdk/spdk_pid60378 00:23:21.690 Removing: /var/run/dpdk/spdk_pid60387 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60397 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60412 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60421 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60431 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60440 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60469 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60496 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60505 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60539 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60547 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60556 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60596 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60608 00:23:21.949 Removing: 
/var/run/dpdk/spdk_pid60640 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60642 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60655 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60661 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60670 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60678 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60685 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60698 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60780 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60822 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60946 00:23:21.949 Removing: /var/run/dpdk/spdk_pid60981 00:23:21.949 Removing: /var/run/dpdk/spdk_pid61027 00:23:21.949 Removing: /var/run/dpdk/spdk_pid61042 00:23:21.949 Removing: /var/run/dpdk/spdk_pid61058 00:23:21.949 Removing: /var/run/dpdk/spdk_pid61078 00:23:21.949 Removing: /var/run/dpdk/spdk_pid61114 00:23:21.949 Removing: /var/run/dpdk/spdk_pid61131 00:23:21.949 Removing: /var/run/dpdk/spdk_pid61209 00:23:21.949 Removing: /var/run/dpdk/spdk_pid61225 00:23:21.949 Removing: /var/run/dpdk/spdk_pid61273 00:23:21.949 Removing: /var/run/dpdk/spdk_pid61347 00:23:21.949 Removing: /var/run/dpdk/spdk_pid61411 00:23:21.949 Removing: /var/run/dpdk/spdk_pid61440 00:23:21.949 Removing: /var/run/dpdk/spdk_pid61534 00:23:21.949 Removing: /var/run/dpdk/spdk_pid61582 00:23:21.949 Removing: /var/run/dpdk/spdk_pid61619 00:23:21.949 Removing: /var/run/dpdk/spdk_pid61841 00:23:21.949 Removing: /var/run/dpdk/spdk_pid61944 00:23:21.949 Removing: /var/run/dpdk/spdk_pid61978 00:23:21.949 Removing: /var/run/dpdk/spdk_pid62002 00:23:21.949 Removing: /var/run/dpdk/spdk_pid62041 00:23:21.949 Removing: /var/run/dpdk/spdk_pid62081 00:23:21.949 Removing: /var/run/dpdk/spdk_pid62114 00:23:21.949 Removing: /var/run/dpdk/spdk_pid62145 00:23:21.949 Removing: /var/run/dpdk/spdk_pid62545 00:23:21.949 Removing: /var/run/dpdk/spdk_pid62583 00:23:21.949 Removing: /var/run/dpdk/spdk_pid62927 00:23:21.949 Removing: /var/run/dpdk/spdk_pid63393 00:23:21.949 Removing: /var/run/dpdk/spdk_pid63675 00:23:21.949 Removing: /var/run/dpdk/spdk_pid64532 00:23:21.949 Removing: /var/run/dpdk/spdk_pid65445 00:23:21.949 Removing: /var/run/dpdk/spdk_pid65568 00:23:21.949 Removing: /var/run/dpdk/spdk_pid65630 00:23:21.949 Removing: /var/run/dpdk/spdk_pid67038 00:23:21.949 Removing: /var/run/dpdk/spdk_pid67350 00:23:21.949 Removing: /var/run/dpdk/spdk_pid71174 00:23:21.949 Removing: /var/run/dpdk/spdk_pid71533 00:23:21.949 Removing: /var/run/dpdk/spdk_pid71642 00:23:21.949 Removing: /var/run/dpdk/spdk_pid71769 00:23:21.949 Removing: /var/run/dpdk/spdk_pid71791 00:23:21.949 Removing: /var/run/dpdk/spdk_pid71821 00:23:21.949 Removing: /var/run/dpdk/spdk_pid71842 00:23:21.949 Removing: /var/run/dpdk/spdk_pid71953 00:23:21.949 Removing: /var/run/dpdk/spdk_pid72081 00:23:21.949 Removing: /var/run/dpdk/spdk_pid72218 00:23:21.949 Removing: /var/run/dpdk/spdk_pid72305 00:23:21.949 Removing: /var/run/dpdk/spdk_pid72492 00:23:21.949 Removing: /var/run/dpdk/spdk_pid72573 00:23:21.949 Removing: /var/run/dpdk/spdk_pid72657 00:23:21.949 Removing: /var/run/dpdk/spdk_pid73005 00:23:21.949 Removing: /var/run/dpdk/spdk_pid73428 00:23:21.949 Removing: /var/run/dpdk/spdk_pid73429 00:23:21.949 Removing: /var/run/dpdk/spdk_pid73430 00:23:21.949 Removing: /var/run/dpdk/spdk_pid73689 00:23:21.949 Removing: /var/run/dpdk/spdk_pid73956 00:23:21.949 Removing: /var/run/dpdk/spdk_pid74336 00:23:21.949 Removing: /var/run/dpdk/spdk_pid74347 00:23:21.949 Removing: /var/run/dpdk/spdk_pid74664 00:23:21.949 Removing: /var/run/dpdk/spdk_pid74685 
00:23:21.949 Removing: /var/run/dpdk/spdk_pid74699 00:23:21.949 Removing: /var/run/dpdk/spdk_pid74725 00:23:21.949 Removing: /var/run/dpdk/spdk_pid74735 00:23:21.949 Removing: /var/run/dpdk/spdk_pid75085 00:23:21.949 Removing: /var/run/dpdk/spdk_pid75132 00:23:21.949 Removing: /var/run/dpdk/spdk_pid75481 00:23:22.273 Removing: /var/run/dpdk/spdk_pid75679 00:23:22.273 Removing: /var/run/dpdk/spdk_pid76137 00:23:22.273 Removing: /var/run/dpdk/spdk_pid76686 00:23:22.273 Removing: /var/run/dpdk/spdk_pid77571 00:23:22.273 Removing: /var/run/dpdk/spdk_pid78219 00:23:22.273 Removing: /var/run/dpdk/spdk_pid78221 00:23:22.273 Removing: /var/run/dpdk/spdk_pid80250 00:23:22.273 Removing: /var/run/dpdk/spdk_pid80297 00:23:22.273 Removing: /var/run/dpdk/spdk_pid80350 00:23:22.273 Removing: /var/run/dpdk/spdk_pid80411 00:23:22.273 Removing: /var/run/dpdk/spdk_pid80525 00:23:22.273 Removing: /var/run/dpdk/spdk_pid80572 00:23:22.273 Removing: /var/run/dpdk/spdk_pid80629 00:23:22.273 Removing: /var/run/dpdk/spdk_pid80682 00:23:22.273 Removing: /var/run/dpdk/spdk_pid81049 00:23:22.273 Removing: /var/run/dpdk/spdk_pid82266 00:23:22.273 Removing: /var/run/dpdk/spdk_pid82405 00:23:22.273 Removing: /var/run/dpdk/spdk_pid82648 00:23:22.273 Removing: /var/run/dpdk/spdk_pid83258 00:23:22.273 Removing: /var/run/dpdk/spdk_pid83418 00:23:22.273 Removing: /var/run/dpdk/spdk_pid83575 00:23:22.273 Removing: /var/run/dpdk/spdk_pid83672 00:23:22.273 Removing: /var/run/dpdk/spdk_pid83833 00:23:22.273 Removing: /var/run/dpdk/spdk_pid83942 00:23:22.273 Removing: /var/run/dpdk/spdk_pid84651 00:23:22.273 Removing: /var/run/dpdk/spdk_pid84692 00:23:22.273 Removing: /var/run/dpdk/spdk_pid84726 00:23:22.273 Removing: /var/run/dpdk/spdk_pid84982 00:23:22.273 Removing: /var/run/dpdk/spdk_pid85013 00:23:22.273 Removing: /var/run/dpdk/spdk_pid85047 00:23:22.273 Removing: /var/run/dpdk/spdk_pid85518 00:23:22.273 Removing: /var/run/dpdk/spdk_pid85535 00:23:22.273 Removing: /var/run/dpdk/spdk_pid85798 00:23:22.273 Removing: /var/run/dpdk/spdk_pid85924 00:23:22.273 Removing: /var/run/dpdk/spdk_pid85939 00:23:22.273 Clean 00:23:22.273 10:46:47 -- common/autotest_common.sh@1451 -- # return 0 00:23:22.273 10:46:47 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:23:22.274 10:46:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:22.274 10:46:47 -- common/autotest_common.sh@10 -- # set +x 00:23:22.274 10:46:47 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:23:22.274 10:46:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:22.274 10:46:47 -- common/autotest_common.sh@10 -- # set +x 00:23:22.274 10:46:47 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:22.274 10:46:47 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:23:22.274 10:46:47 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:23:22.274 10:46:47 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:23:22.274 10:46:47 -- spdk/autotest.sh@394 -- # hostname 00:23:22.274 10:46:47 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:23:22.537 geninfo: WARNING: invalid characters removed from testname! 
00:23:54.602 10:47:14 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:54.602 10:47:18 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:56.502 10:47:21 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:59.054 10:47:24 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:02.341 10:47:27 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:04.901 10:47:30 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:08.186 10:47:32 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:24:08.186 10:47:32 -- spdk/autorun.sh@1 -- $ timing_finish 00:24:08.186 10:47:32 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:24:08.186 10:47:32 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:08.186 10:47:32 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:24:08.186 10:47:32 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:08.186 + [[ -n 5263 ]] 00:24:08.186 + sudo kill 5263 00:24:08.194 [Pipeline] } 00:24:08.208 [Pipeline] // timeout 00:24:08.214 [Pipeline] } 00:24:08.228 [Pipeline] // stage 00:24:08.231 [Pipeline] } 00:24:08.246 [Pipeline] // catchError 00:24:08.255 [Pipeline] stage 00:24:08.258 [Pipeline] { (Stop VM) 00:24:08.269 [Pipeline] sh 00:24:08.547 + vagrant halt 00:24:12.746 ==> default: Halting domain... 
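Note: autotest.sh@394-403 above is the coverage post-processing: capture per-test coverage under the fedora39 test name, merge it with the baseline, then strip third-party and uninteresting paths before reporting. The same pipeline, condensed (core lcov flags as traced; the genhtml --rc options and the long /home/vagrant/spdk_repo/spdk/../output/ prefixes are dropped here for brevity):

    LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q'
    $LCOV -a cov_base.info -a cov_test.info -o cov_total.info    # merge
    $LCOV -r cov_total.info '*/dpdk/*'           -o cov_total.info
    $LCOV -r cov_total.info '/usr/*'             -o cov_total.info \
        --ignore-errors unused,unused
    $LCOV -r cov_total.info '*/examples/vmd/*'   -o cov_total.info
    $LCOV -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info
    $LCOV -r cov_total.info '*/app/spdk_top/*'   -o cov_total.info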
00:24:18.025 [Pipeline] sh 00:24:18.306 + vagrant destroy -f 00:24:21.598 ==> default: Removing domain... 00:24:21.869 [Pipeline] sh 00:24:22.152 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:24:22.161 [Pipeline] } 00:24:22.177 [Pipeline] // stage 00:24:22.183 [Pipeline] } 00:24:22.197 [Pipeline] // dir 00:24:22.203 [Pipeline] } 00:24:22.218 [Pipeline] // wrap 00:24:22.226 [Pipeline] } 00:24:22.241 [Pipeline] // catchError 00:24:22.251 [Pipeline] stage 00:24:22.253 [Pipeline] { (Epilogue) 00:24:22.267 [Pipeline] sh 00:24:22.548 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:29.153 [Pipeline] catchError 00:24:29.155 [Pipeline] { 00:24:29.170 [Pipeline] sh 00:24:29.458 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:29.717 Artifacts sizes are good 00:24:29.725 [Pipeline] } 00:24:29.738 [Pipeline] // catchError 00:24:29.748 [Pipeline] archiveArtifacts 00:24:29.755 Archiving artifacts 00:24:29.913 [Pipeline] cleanWs 00:24:29.945 [WS-CLEANUP] Deleting project workspace... 00:24:29.945 [WS-CLEANUP] Deferred wipeout is used... 00:24:29.953 [WS-CLEANUP] done 00:24:29.955 [Pipeline] } 00:24:29.969 [Pipeline] // stage 00:24:29.973 [Pipeline] } 00:24:29.986 [Pipeline] // node 00:24:29.991 [Pipeline] End of Pipeline 00:24:30.033 Finished: SUCCESS